Moving code to GitHub

Pablo Cingolani 2014-12-19 08:30:46 -05:00
parent 6657143f71
commit 5588192716
3004 changed files with 379370 additions and 0 deletions

LICENSE

@@ -0,0 +1,165 @@
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.

MANIFEST.MF

@@ -0,0 +1,3 @@
Manifest-Version: 1.0
Main-Class: net.sourceforge.jFuzzyLogic.JFuzzyLogic
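The `Main-Class` attribute above is what makes `java -jar jFuzzyLogic.jar` work. A minimal standalone sketch of how that attribute is read at runtime via `java.util.jar.Manifest` (the class name and the inlined manifest text are ours, for illustration only):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.jar.Manifest;

public class ManifestSketch {
    public static void main(String[] args) throws IOException {
        // Same two attributes as the MANIFEST.MF above; the trailing
        // newline is required for the last attribute to be parsed.
        String mf = "Manifest-Version: 1.0\n"
                  + "Main-Class: net.sourceforge.jFuzzyLogic.JFuzzyLogic\n";
        Manifest manifest = new Manifest(new ByteArrayInputStream(mf.getBytes("UTF-8")));
        // This is the class the JVM looks up when the JAR is run with -jar
        String mainClass = manifest.getMainAttributes().getValue("Main-Class");
        System.out.println(mainClass); // net.sourceforge.jFuzzyLogic.JFuzzyLogic
    }
}
```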

README.txt

@@ -0,0 +1,4 @@
Documentation
http://jfuzzylogic.sourceforge.net

README_release.txt

@@ -0,0 +1,73 @@
Release instructions
--------------------
Main JAR file
-------------
1) Create jFuzzyLogic.jar file
Eclipse -> Package explorer -> jFuzzyLogic -> Select file jFuzzyLogic.jardesc -> Right click "Create JAR"
2) Upload the JAR file to SourceForge (use the sf.net menu)
HTML pages
----------
1) Upload HTML pages to SourceForge
cd ~/workspace/jFuzzyLogic
scp index.html pcingola,jfuzzylogic@frs.sourceforge.net:htdocs/
cd ~/workspace/jFuzzyLogic/html
scp *.{html,css} pcingola,jfuzzylogic@frs.sourceforge.net:htdocs/html
scp images/*.png pcingola,jfuzzylogic@frs.sourceforge.net:htdocs/html/images/
scp videos/*.swf pcingola,jfuzzylogic@frs.sourceforge.net:htdocs/html/videos/
scp -r assets dist fcl pdf pcingola,jfuzzylogic@frs.sourceforge.net:htdocs/html/
Eclipse plugin
--------------
1) Create the small jFuzzyLogic.jar file (it's better to use this small JAR than the big one that bundles all the source files)
cd ~/workspace/jFuzzyLogic/
ant
# Check the JAR file
cd
java -jar jFuzzyLogic.jar
2) Copy jFuzzyLogic.jar file to UI project
cp jFuzzyLogic.jar net.sourceforge.jFuzzyLogic.Fcl.ui/lib/jFuzzyLogic.jar
3) Build eclipse update site
In Eclipse:
- In package explorer, refresh all net.sourceforge.jFuzzyLogic.Fcl.* projects
- Open the net.sourceforge.jFuzzyLogic.Fcl.updateSite project
- Delete the contents of the 'plugins' and 'features' dirs
cd ~/workspace/net.sourceforge.jFuzzyLogic.Fcl.updateSite
rm -vf *.jar plugins/*.jar features/*.jar
- Open site.xml file
- Go to "Site Map" tab
- Open the jFuzzyLogic category, remove the 'feature' (called something like "net.sourceforge.jFuzzyLogic.Fcl.sdk_1.1.0.201212101535.jar")
and add it again (just to be sure)
- Click the "Build All" button
- Refresh the project (you should see the JAR files in the plugin folders now).
4) Upload Eclipse plugin files to SourceForge (Eclipse update site)
cd ~/workspace/net.sourceforge.jFuzzyLogic.Fcl.updateSite
scp -r . pcingola,jfuzzylogic@frs.sourceforge.net:htdocs/eclipse/

antlr_3_1_source/Tool.java

@@ -0,0 +1,659 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2008 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr;
import org.antlr.analysis.*;
import org.antlr.codegen.CodeGenerator;
import org.antlr.runtime.misc.Stats;
import org.antlr.tool.*;
import java.io.*;
import java.util.*;
/** The main ANTLR entry point. Read a grammar and generate a parser. */
public class Tool {
public static final String VERSION = "3.1";
public static final String UNINITIALIZED_DIR = "<unset-dir>";
// Input parameters / options
protected List grammarFileNames = new ArrayList();
protected boolean generate_NFA_dot = false;
protected boolean generate_DFA_dot = false;
protected String outputDirectory = UNINITIALIZED_DIR;
protected String libDirectory = ".";
protected boolean debug = false;
protected boolean trace = false;
protected boolean profile = false;
protected boolean report = false;
protected boolean printGrammar = false;
protected boolean depend = false;
protected boolean forceAllFilesToOutputDir = false;
protected boolean deleteTempLexer = true;
// the internal options are for my use on the command line during dev
public static boolean internalOption_PrintGrammarTree = false;
public static boolean internalOption_PrintDFA = false;
public static boolean internalOption_ShowNFAConfigsInDFA = false;
public static boolean internalOption_watchNFAConversion = false;
public static void main(String[] args) {
ErrorManager.info("ANTLR Parser Generator Version " +
VERSION + " (August 12, 2008) 1989-2008");
Tool antlr = new Tool(args);
antlr.process();
if ( ErrorManager.getNumErrors() > 0 ) {
System.exit(1);
}
System.exit(0);
}
public Tool() {
}
public Tool(String[] args) {
processArgs(args);
}
public void processArgs(String[] args) {
if ( args==null || args.length==0 ) {
help();
return;
}
for (int i = 0; i < args.length; i++) {
if (args[i].equals("-o") || args[i].equals("-fo")) {
if (i + 1 >= args.length) {
System.err.println("missing output directory with -fo/-o option; ignoring");
}
else {
if ( args[i].equals("-fo") ) { // force output into dir
forceAllFilesToOutputDir = true;
}
i++;
outputDirectory = args[i];
if ( outputDirectory.endsWith("/") ||
outputDirectory.endsWith("\\") )
{
outputDirectory =
outputDirectory.substring(0,outputDirectory.length()-1);
}
File outDir = new File(outputDirectory);
if( outDir.exists() && !outDir.isDirectory() ) {
ErrorManager.error(ErrorManager.MSG_OUTPUT_DIR_IS_FILE,outputDirectory);
libDirectory = ".";
}
}
}
else if (args[i].equals("-lib")) {
if (i + 1 >= args.length) {
System.err.println("missing library directory with -lib option; ignoring");
}
else {
i++;
libDirectory = args[i];
if ( libDirectory.endsWith("/") ||
libDirectory.endsWith("\\") )
{
libDirectory =
libDirectory.substring(0,libDirectory.length()-1);
}
File outDir = new File(libDirectory);
if( !outDir.exists() ) {
ErrorManager.error(ErrorManager.MSG_DIR_NOT_FOUND,libDirectory);
libDirectory = ".";
}
}
}
else if (args[i].equals("-nfa")) {
generate_NFA_dot=true;
}
else if (args[i].equals("-dfa")) {
generate_DFA_dot=true;
}
else if (args[i].equals("-debug")) {
debug=true;
}
else if (args[i].equals("-trace")) {
trace=true;
}
else if (args[i].equals("-report")) {
report=true;
}
else if (args[i].equals("-profile")) {
profile=true;
}
else if (args[i].equals("-print")) {
printGrammar = true;
}
else if (args[i].equals("-depend")) {
depend=true;
}
else if (args[i].equals("-message-format")) {
if (i + 1 >= args.length) {
System.err.println("missing output format with -message-format option; using default");
}
else {
i++;
ErrorManager.setFormat(args[i]);
}
}
else if (args[i].equals("-Xgrtree")) {
internalOption_PrintGrammarTree=true; // print grammar tree
}
else if (args[i].equals("-Xdfa")) {
internalOption_PrintDFA=true;
}
else if (args[i].equals("-Xnoprune")) {
DFAOptimizer.PRUNE_EBNF_EXIT_BRANCHES=false;
}
else if (args[i].equals("-Xnocollapse")) {
DFAOptimizer.COLLAPSE_ALL_PARALLEL_EDGES=false;
}
else if (args[i].equals("-Xdbgconversion")) {
NFAToDFAConverter.debug = true;
}
else if (args[i].equals("-Xmultithreaded")) {
NFAToDFAConverter.SINGLE_THREADED_NFA_CONVERSION = false;
}
else if (args[i].equals("-Xnomergestopstates")) {
DFAOptimizer.MERGE_STOP_STATES = false;
}
else if (args[i].equals("-Xdfaverbose")) {
internalOption_ShowNFAConfigsInDFA = true;
}
else if (args[i].equals("-Xwatchconversion")) {
internalOption_watchNFAConversion = true;
}
else if (args[i].equals("-XdbgST")) {
CodeGenerator.EMIT_TEMPLATE_DELIMITERS = true;
}
else if (args[i].equals("-Xmaxinlinedfastates")) {
if (i + 1 >= args.length) {
System.err.println("missing max inline dfa states -Xmaxinlinedfastates option; ignoring");
}
else {
i++;
CodeGenerator.MAX_ACYCLIC_DFA_STATES_INLINE = Integer.parseInt(args[i]);
}
}
else if (args[i].equals("-Xm")) {
if (i + 1 >= args.length) {
System.err.println("missing max recursion with -Xm option; ignoring");
}
else {
i++;
NFAContext.MAX_SAME_RULE_INVOCATIONS_PER_NFA_CONFIG_STACK = Integer.parseInt(args[i]);
}
}
else if (args[i].equals("-Xmaxdfaedges")) {
if (i + 1 >= args.length) {
System.err.println("missing max number of edges with -Xmaxdfaedges option; ignoring");
}
else {
i++;
DFA.MAX_STATE_TRANSITIONS_FOR_TABLE = Integer.parseInt(args[i]);
}
}
else if (args[i].equals("-Xconversiontimeout")) {
if (i + 1 >= args.length) {
System.err.println("missing max time in ms -Xconversiontimeout option; ignoring");
}
else {
i++;
DFA.MAX_TIME_PER_DFA_CREATION = Integer.parseInt(args[i]);
}
}
else if (args[i].equals("-Xnfastates")) {
DecisionProbe.verbose=true;
}
else if (args[i].equals("-X")) {
Xhelp();
}
else {
if (args[i].charAt(0) != '-') {
// Must be the grammar file
grammarFileNames.add(args[i]);
}
}
}
}
/*
protected void checkForInvalidArguments(String[] args, BitSet cmdLineArgValid) {
// check for invalid command line args
for (int a = 0; a < args.length; a++) {
if (!cmdLineArgValid.member(a)) {
System.err.println("invalid command-line argument: " + args[a] + "; ignored");
}
}
}
*/
public void process() {
int numFiles = grammarFileNames.size();
boolean exceptionWhenWritingLexerFile = false;
String lexerGrammarFileName = null; // necessary at this scope to have access in the catch below
for (int i = 0; i < numFiles; i++) {
String grammarFileName = (String) grammarFileNames.get(i);
if ( numFiles > 1 && !depend ) {
System.out.println(grammarFileName);
}
try {
if ( depend ) {
BuildDependencyGenerator dep =
new BuildDependencyGenerator(this, grammarFileName);
List outputFiles = dep.getGeneratedFileList();
List dependents = dep.getDependenciesFileList();
//System.out.println("output: "+outputFiles);
//System.out.println("dependents: "+dependents);
System.out.println(dep.getDependencies());
continue;
}
Grammar grammar = getRootGrammar(grammarFileName);
// we now have all grammars read in as ASTs
// (i.e., root and all delegates)
grammar.composite.assignTokenTypes();
grammar.composite.defineGrammarSymbols();
grammar.composite.createNFAs();
generateRecognizer(grammar);
if ( printGrammar ) {
grammar.printGrammar(System.out);
}
if ( report ) {
GrammarReport report = new GrammarReport(grammar);
System.out.println(report.toString());
// print out a backtracking report too (that is not encoded into log)
System.out.println(report.getBacktrackingReport());
// same for aborted NFA->DFA conversions
System.out.println(report.getAnalysisTimeoutReport());
}
if ( profile ) {
GrammarReport report = new GrammarReport(grammar);
Stats.writeReport(GrammarReport.GRAMMAR_STATS_FILENAME,
report.toNotifyString());
}
// now handle the lexer if one was created for a merged spec
String lexerGrammarStr = grammar.getLexerGrammar();
//System.out.println("lexer grammar:\n"+lexerGrammarStr);
if ( grammar.type==Grammar.COMBINED && lexerGrammarStr!=null ) {
lexerGrammarFileName = grammar.getImplicitlyGeneratedLexerFileName();
try {
Writer w = getOutputFile(grammar,lexerGrammarFileName);
w.write(lexerGrammarStr);
w.close();
}
catch (IOException e) {
// emit different error message when creating the implicit lexer fails
// due to write permission error
exceptionWhenWritingLexerFile = true;
throw e;
}
try {
StringReader sr = new StringReader(lexerGrammarStr);
Grammar lexerGrammar = new Grammar();
lexerGrammar.composite.watchNFAConversion = internalOption_watchNFAConversion;
lexerGrammar.implicitLexer = true;
lexerGrammar.setTool(this);
File lexerGrammarFullFile =
new File(getFileDirectory(lexerGrammarFileName),lexerGrammarFileName);
lexerGrammar.setFileName(lexerGrammarFullFile.toString());
lexerGrammar.importTokenVocabulary(grammar);
lexerGrammar.parseAndBuildAST(sr);
sr.close();
lexerGrammar.composite.assignTokenTypes();
lexerGrammar.composite.defineGrammarSymbols();
lexerGrammar.composite.createNFAs();
generateRecognizer(lexerGrammar);
}
finally {
// make sure we clean up
if ( deleteTempLexer ) {
File outputDir = getOutputDirectory(lexerGrammarFileName);
File outputFile = new File(outputDir, lexerGrammarFileName);
outputFile.delete();
}
}
}
}
catch (IOException e) {
if (exceptionWhenWritingLexerFile) {
ErrorManager.error(ErrorManager.MSG_CANNOT_WRITE_FILE,
lexerGrammarFileName, e);
} else {
ErrorManager.error(ErrorManager.MSG_CANNOT_OPEN_FILE,
grammarFileName);
}
}
catch (Exception e) {
ErrorManager.error(ErrorManager.MSG_INTERNAL_ERROR, grammarFileName, e);
}
/*
finally {
System.out.println("creates="+ Interval.creates);
System.out.println("hits="+ Interval.hits);
System.out.println("misses="+ Interval.misses);
System.out.println("outOfRange="+ Interval.outOfRange);
}
*/
}
}
/** Get a grammar mentioned on the command-line and any delegates */
public Grammar getRootGrammar(String grammarFileName)
throws IOException
{
//StringTemplate.setLintMode(true);
// grammars mentioned on command line are either roots or single grammars.
// create the necessary composite in case it's got delegates; even
// single grammar needs it to get token types.
CompositeGrammar composite = new CompositeGrammar();
Grammar grammar = new Grammar(this,grammarFileName,composite);
composite.setDelegationRoot(grammar);
FileReader fr = null;
fr = new FileReader(grammarFileName);
BufferedReader br = new BufferedReader(fr);
grammar.parseAndBuildAST(br);
composite.watchNFAConversion = internalOption_watchNFAConversion;
br.close();
fr.close();
return grammar;
}
/** Create NFA, DFA and generate code for grammar.
* Create NFA for any delegates first. Once all NFA are created,
* it's ok to create DFA, which must check for left-recursion. That check
* is done by walking the full NFA, which therefore must be complete.
* After all NFA, comes DFA conversion for root grammar then code gen for
* root grammar. DFA and code gen for delegates comes next.
*/
protected void generateRecognizer(Grammar grammar) {
String language = (String)grammar.getOption("language");
if ( language!=null ) {
CodeGenerator generator = new CodeGenerator(this, grammar, language);
grammar.setCodeGenerator(generator);
generator.setDebug(debug);
generator.setProfile(profile);
generator.setTrace(trace);
// generate NFA early in case of crash later (for debugging)
if ( generate_NFA_dot ) {
generateNFAs(grammar);
}
// GENERATE CODE
generator.genRecognizer();
if ( generate_DFA_dot ) {
generateDFAs(grammar);
}
List<Grammar> delegates = grammar.getDirectDelegates();
for (int i = 0; delegates!=null && i < delegates.size(); i++) {
Grammar delegate = (Grammar)delegates.get(i);
if ( delegate!=grammar ) { // already processing this one
generateRecognizer(delegate);
}
}
}
}
public void generateDFAs(Grammar g) {
for (int d=1; d<=g.getNumberOfDecisions(); d++) {
DFA dfa = g.getLookaheadDFA(d);
if ( dfa==null ) {
continue; // not there for some reason, ignore
}
DOTGenerator dotGenerator = new DOTGenerator(g);
String dot = dotGenerator.getDOT( dfa.startState );
String dotFileName = g.name+"."+"dec-"+d;
if ( g.implicitLexer ) {
dotFileName = g.name+Grammar.grammarTypeToFileNameSuffix[g.type]+"."+"dec-"+d;
}
try {
writeDOTFile(g, dotFileName, dot);
}
catch(IOException ioe) {
ErrorManager.error(ErrorManager.MSG_CANNOT_GEN_DOT_FILE,
dotFileName,
ioe);
}
}
}
protected void generateNFAs(Grammar g) {
DOTGenerator dotGenerator = new DOTGenerator(g);
Collection rules = g.getAllImportedRules();
rules.addAll(g.getRules());
for (Iterator itr = rules.iterator(); itr.hasNext();) {
Rule r = (Rule) itr.next();
try {
String dot = dotGenerator.getDOT(r.startState);
if ( dot!=null ) {
writeDOTFile(g, r, dot);
}
}
catch (IOException ioe) {
ErrorManager.error(ErrorManager.MSG_CANNOT_WRITE_FILE, ioe);
}
}
}
protected void writeDOTFile(Grammar g, Rule r, String dot) throws IOException {
writeDOTFile(g, r.grammar.name+"."+r.name, dot);
}
protected void writeDOTFile(Grammar g, String name, String dot) throws IOException {
Writer fw = getOutputFile(g, name+".dot");
fw.write(dot);
fw.close();
}
private static void help() {
System.err.println("usage: java org.antlr.Tool [args] file.g [file2.g file3.g ...]");
System.err.println(" -o outputDir specify output directory where all output is generated");
System.err.println(" -fo outputDir same as -o but force even files with relative paths to dir");
System.err.println(" -lib dir specify location of token files");
System.err.println(" -depend generate file dependencies");
System.err.println(" -report print out a report about the grammar(s) processed");
System.err.println(" -print print out the grammar without actions");
System.err.println(" -debug generate a parser that emits debugging events");
System.err.println(" -profile generate a parser that computes profiling information");
System.err.println(" -nfa generate an NFA for each rule");
System.err.println(" -dfa generate a DFA for each decision point");
System.err.println(" -message-format name specify output style for messages");
System.err.println(" -X display extended argument list");
}
private static void Xhelp() {
System.err.println(" -Xgrtree print the grammar AST");
System.err.println(" -Xdfa print DFA as text ");
System.err.println(" -Xnoprune test lookahead against EBNF block exit branches");
System.err.println(" -Xnocollapse collapse incident edges into DFA states");
System.err.println(" -Xdbgconversion dump lots of info during NFA conversion");
System.err.println(" -Xmultithreaded run the analysis in 2 threads");
System.err.println(" -Xnomergestopstates do not merge stop states");
System.err.println(" -Xdfaverbose generate DFA states in DOT with NFA configs");
System.err.println(" -Xwatchconversion print a message for each NFA before converting");
System.err.println(" -XdbgST put tags at start/stop of all templates in output");
System.err.println(" -Xm m max number of rule invocations during conversion");
System.err.println(" -Xmaxdfaedges m max \"comfortable\" number of edges for single DFA state");
System.err.println(" -Xconversiontimeout t set NFA conversion timeout for each decision");
System.err.println(" -Xmaxinlinedfastates m max DFA states before table used rather than inlining");
System.err.println(" -Xnfastates for nondeterminisms, list NFA states for each path");
}
public void setOutputDirectory(String outputDirectory) {
this.outputDirectory = outputDirectory;
}
/** This method is used by all code generators to create new output
* files. If the outputDir set by -o is not present it will be created.
* The final filename is sensitive to the output directory and
* the directory where the grammar file was found. If -o is /tmp
* and the original grammar file was foo/t.g then output files
* go in /tmp/foo.
*
* The output dir -o spec takes precedence if it's absolute.
* E.g., if the grammar file dir is absolute the output dir is given
precedence. "-o /tmp /usr/lib/t.g" results in "/tmp/T.java" as
* output (assuming t.g holds T.java).
*
* If no -o is specified, then just write to the directory where the
* grammar file was found.
*
* If outputDirectory==null then write a String.
*/
public Writer getOutputFile(Grammar g, String fileName) throws IOException {
if ( outputDirectory==null ) {
return new StringWriter();
}
// output directory is a function of where the grammar file lives
// for subdir/T.g, you get subdir here. Well, depends on -o etc...
File outputDir = getOutputDirectory(g.getFileName());
File outputFile = new File(outputDir, fileName);
if( !outputDir.exists() ) {
outputDir.mkdirs();
}
FileWriter fw = new FileWriter(outputFile);
return new BufferedWriter(fw);
}
public File getOutputDirectory(String fileNameWithPath) {
File outputDir = new File(outputDirectory);
String fileDirectory = getFileDirectory(fileNameWithPath);
if ( outputDirectory!=UNINITIALIZED_DIR ) {
// -o /tmp /var/lib/t.g => /tmp/T.java
// -o subdir/output /usr/lib/t.g => subdir/output/T.java
// -o . /usr/lib/t.g => ./T.java
if ( fileDirectory!=null &&
(new File(fileDirectory).isAbsolute() ||
fileDirectory.startsWith("~")) || // isAbsolute doesn't count this :(
forceAllFilesToOutputDir
)
{
// somebody set the dir, it takes precedence; write new file there
outputDir = new File(outputDirectory);
}
else {
// -o /tmp subdir/t.g => /tmp/subdir/t.g
if ( fileDirectory!=null ) {
outputDir = new File(outputDirectory, fileDirectory);
}
else {
outputDir = new File(outputDirectory);
}
}
}
else {
// they didn't specify a -o dir so just write to location
// where grammar is, absolute or relative
String dir = ".";
if ( fileDirectory!=null ) {
dir = fileDirectory;
}
outputDir = new File(dir);
}
return outputDir;
}
/** Name a file in the -lib dir. Imported grammars and .tokens files */
public String getLibraryFile(String fileName) throws IOException {
return libDirectory+File.separator+fileName;
}
public String getLibraryDirectory() {
return libDirectory;
}
/** Return the directory containing the grammar file for this grammar.
* normally this is a relative path from current directory. People will
* often do "java org.antlr.Tool grammars/*.g3" So the file will be
* "grammars/foo.g3" etc... This method returns "grammars".
*/
public String getFileDirectory(String fileName) {
File f = new File(fileName);
return f.getParent();
}
/** Return a File descriptor for vocab file. Look in library or
* in -o output path. antlr -o foo T.g U.g where U needs T.tokens
* won't work unless we look in foo too.
*/
public File getImportedVocabFile(String vocabName) {
File f = new File(getLibraryDirectory(),
File.separator+
vocabName+
CodeGenerator.VOCAB_FILE_EXTENSION);
if ( f.exists() ) {
return f;
}
return new File(outputDirectory+
File.separator+
vocabName+
CodeGenerator.VOCAB_FILE_EXTENSION);
}
/** If the tool needs to panic/exit, how do we do that? */
public void panic() {
throw new Error("ANTLR panic");
}
/** Return a time stamp string accurate to sec: yyyy-mm-dd hh:mm:ss */
public static String getCurrentTimeStamp() {
GregorianCalendar calendar = new java.util.GregorianCalendar();
int y = calendar.get(Calendar.YEAR);
int m = calendar.get(Calendar.MONTH)+1; // zero-based for months
int d = calendar.get(Calendar.DAY_OF_MONTH);
int h = calendar.get(Calendar.HOUR_OF_DAY);
int min = calendar.get(Calendar.MINUTE);
int sec = calendar.get(Calendar.SECOND);
String sy = String.valueOf(y);
String sm = m<10?"0"+m:String.valueOf(m);
String sd = d<10?"0"+d:String.valueOf(d);
String sh = h<10?"0"+h:String.valueOf(h);
String smin = min<10?"0"+min:String.valueOf(min);
String ssec = sec<10?"0"+sec:String.valueOf(sec);
return new StringBuffer().append(sy).append("-").append(sm).append("-")
.append(sd).append(" ").append(sh).append(":").append(smin)
.append(":").append(ssec).toString();
}
}
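The manual zero-padding in `getCurrentTimeStamp()` above can be written more compactly with `String.format`'s date/time conversions. A self-contained sketch (the class name is ours, not from the repo) producing the same `yyyy-mm-dd hh:mm:ss` format:

```java
import java.util.Calendar;
import java.util.GregorianCalendar;

public class TimeStampSketch {
    // Same output as Tool.getCurrentTimeStamp(), but the %t conversions
    // handle the zero-padding that the original does by hand.
    public static String format(Calendar c) {
        return String.format("%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS", c);
    }

    public static void main(String[] args) {
        // Month is zero-based in GregorianCalendar, hence Calendar.DECEMBER
        GregorianCalendar c = new GregorianCalendar(2014, Calendar.DECEMBER, 19, 8, 30, 46);
        System.out.println(format(c)); // 2014-12-19 08:30:46
    }
}
```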


@@ -0,0 +1,56 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2008 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.analysis;
import org.antlr.tool.GrammarAST;
import org.antlr.tool.Grammar;
public class ActionLabel extends Label {
public GrammarAST actionAST;
public ActionLabel(GrammarAST actionAST) {
super(ACTION);
this.actionAST = actionAST;
}
public boolean isEpsilon() {
return true; // we are to be ignored by analysis 'cept for predicates
}
public boolean isAction() {
return true;
}
public String toString() {
return "{"+actionAST+"}";
}
public String toString(Grammar g) {
return toString();
}
}

package org.antlr.analysis;
/** An NFA configuration context stack overflowed. */
public class AnalysisRecursionOverflowException extends RuntimeException {
public DFAState ovfState;
public NFAConfiguration proposedNFAConfiguration;
public AnalysisRecursionOverflowException(DFAState ovfState,
NFAConfiguration proposedNFAConfiguration)
{
this.ovfState = ovfState;
this.proposedNFAConfiguration = proposedNFAConfiguration;
}
}

package org.antlr.analysis;
/** Analysis took too long; bail out of entire DFA construction. */
public class AnalysisTimeoutException extends RuntimeException {
public DFA abortedDFA;
public AnalysisTimeoutException(DFA abortedDFA) {
this.abortedDFA = abortedDFA;
}
}

File diff suppressed because it is too large.
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.analysis;
import org.antlr.tool.Grammar;
import org.antlr.misc.Utils;
import java.util.HashSet;
import java.util.Set;
/** A module to perform optimizations on DFAs.
*
* I could more easily (and more quickly) do some optimizations (such as
* PRUNE_EBNF_EXIT_BRANCHES) during DFA construction, but then it
* messes up the determinism checking. For example, it looks like
* loop exit branches are unreachable if you prune exit branches
* during DFA construction and before determinism checks.
*
* In general, ANTLR's NFA->DFA->codegen pipeline seems very robust
* to me which I attribute to a uniform and consistent set of data
* structures. Regardless of what I want to "say"/implement, I do so
* within the confines of, for example, a DFA. The code generator
* can then just generate code--it doesn't have to do much thinking.
* Putting optimizations in the code gen code really starts to make
* it a spaghetti factory (uh oh, now I'm hungry!). The pipeline is
* very testable; each stage has well defined input/output pairs.
*
* ### Optimization: PRUNE_EBNF_EXIT_BRANCHES
*
* There is no need to test EBNF block exit branches. Not only is it
* an unneeded computation, but counter-intuitively, you actually get
* better errors. You can report an error at the missing or extra
* token rather than as soon as you've figured out you will fail.
*
* Imagine optional block "( DOT CLASS )? SEMI". ANTLR generates:
*
* int alt=0;
* if ( input.LA(1)==DOT ) {
* alt=1;
* }
* else if ( input.LA(1)==SEMI ) {
* alt=2;
* }
*
* Clearly, since Parser.match() will ultimately find the error, we
* do not want to report an error nor do we want to bother testing
* lookahead against what follows the (...)? We want to generate
* simply "should I enter the subrule?":
*
* int alt=2;
* if ( input.LA(1)==DOT ) {
* alt=1;
* }
*
* NOTE 1. Greedy loops cannot be optimized in this way. For example,
* "(greedy=false:'x'|.)* '\n'". You specifically need the exit branch
* to tell you when to terminate the loop as the same input actually
* predicts one of the alts (i.e., staying in the loop).
*
* NOTE 2. I do not optimize cyclic DFAs at the moment as it doesn't
* seem to work. ;) I'll have to investigate later to see what work I
* can do on cyclic DFAs to make them have fewer edges. Might have
* something to do with the EOT token.
*
* ### PRUNE_SUPERFLUOUS_EOT_EDGES
*
* When a token is a subset of another such as the following rules, ANTLR
* quietly assumes the first token to resolve the ambiguity.
*
* EQ : '=' ;
* ASSIGNOP : '=' | '+=' ;
*
* It can yield states that have only a single edge on EOT to an accept
* state. This is a waste and messes up my code generation. ;) If the
* Tokens rule DFA goes
*
* s0 -'='-> s3 -EOT-> s5 (accept)
*
* then s5 should be pruned and s3 should be made an accept. Do NOT do this
* for keyword versus ID as the state with EOT edge emanating from it will
* also have another edge.
*
* ### Optimization: COLLAPSE_ALL_INCIDENT_EDGES
*
* Done during DFA construction. See method addTransition() in
* NFAToDFAConverter.
*
* ### Optimization: MERGE_STOP_STATES
*
* Done during DFA construction. See addDFAState() in NFAToDFAConverter.
*/
public class DFAOptimizer {
public static boolean PRUNE_EBNF_EXIT_BRANCHES = true;
public static boolean PRUNE_TOKENS_RULE_SUPERFLUOUS_EOT_EDGES = true;
public static boolean COLLAPSE_ALL_PARALLEL_EDGES = true;
public static boolean MERGE_STOP_STATES = true;
/** Used by DFA state machine generator to avoid infinite recursion
* resulting from cycles in the DFA. This is a set of int state #s.
* This is a side-effect of calling optimize; can't clear after use
* because code gen needs it.
*/
protected Set<Integer> visited = new HashSet<Integer>();
protected Grammar grammar;
public DFAOptimizer(Grammar grammar) {
this.grammar = grammar;
}
public void optimize() {
// optimize each DFA in this grammar
for (int decisionNumber=1;
decisionNumber<=grammar.getNumberOfDecisions();
decisionNumber++)
{
DFA dfa = grammar.getLookaheadDFA(decisionNumber);
optimize(dfa);
}
}
protected void optimize(DFA dfa) {
if ( dfa==null ) {
return; // nothing to do
}
/*
System.out.println("Optimize DFA "+dfa.decisionNFAStartState.decisionNumber+
" num states="+dfa.getNumberOfStates());
*/
//long start = System.currentTimeMillis();
if ( PRUNE_EBNF_EXIT_BRANCHES && dfa.canInlineDecision() ) {
visited.clear();
int decisionType =
dfa.getNFADecisionStartState().decisionStateType;
if ( dfa.isGreedy() &&
(decisionType==NFAState.OPTIONAL_BLOCK_START ||
decisionType==NFAState.LOOPBACK) )
{
optimizeExitBranches(dfa.startState);
}
}
// If the Tokens rule has syntactically ambiguous rules, try to prune
if ( PRUNE_TOKENS_RULE_SUPERFLUOUS_EOT_EDGES &&
dfa.isTokensRuleDecision() &&
dfa.probe.stateToSyntacticallyAmbiguousTokensRuleAltsMap.size()>0 )
{
visited.clear();
optimizeEOTBranches(dfa.startState);
}
/* ack...code gen needs this, cannot optimize
visited.clear();
unlinkUnneededStateData(dfa.startState);
*/
//long stop = System.currentTimeMillis();
//System.out.println("minimized in "+(int)(stop-start)+" ms");
}
protected void optimizeExitBranches(DFAState d) {
Integer sI = Utils.integer(d.stateNumber);
if ( visited.contains(sI) ) {
return; // already visited
}
visited.add(sI);
int nAlts = d.dfa.getNumberOfAlts();
for (int i = 0; i < d.getNumberOfTransitions(); i++) {
Transition edge = (Transition) d.transition(i);
DFAState edgeTarget = ((DFAState)edge.target);
/*
System.out.println(d.stateNumber+"-"+
edge.label.toString(d.dfa.nfa.grammar)+"->"+
edgeTarget.stateNumber);
*/
// if target is an accept state and that alt is the exit alt
if ( edgeTarget.isAcceptState() &&
edgeTarget.getUniquelyPredictedAlt()==nAlts)
{
/*
System.out.println("ignoring transition "+i+" to max alt "+
d.dfa.getNumberOfAlts());
*/
d.removeTransition(i);
i--; // back up one so that i++ of loop iteration stays within bounds
}
optimizeExitBranches(edgeTarget);
}
}
protected void optimizeEOTBranches(DFAState d) {
Integer sI = Utils.integer(d.stateNumber);
if ( visited.contains(sI) ) {
return; // already visited
}
visited.add(sI);
for (int i = 0; i < d.getNumberOfTransitions(); i++) {
Transition edge = (Transition) d.transition(i);
DFAState edgeTarget = ((DFAState)edge.target);
/*
System.out.println(d.stateNumber+"-"+
edge.label.toString(d.dfa.nfa.grammar)+"->"+
edgeTarget.stateNumber);
*/
// if the only outgoing edge is EOT and the target is an accept state, prune
if ( PRUNE_TOKENS_RULE_SUPERFLUOUS_EOT_EDGES &&
edgeTarget.isAcceptState() &&
d.getNumberOfTransitions()==1 &&
edge.label.isAtom() &&
edge.label.getAtom()==Label.EOT )
{
//System.out.println("state "+d+" can be pruned");
// remove the superfluous EOT edge
d.removeTransition(i);
d.setAcceptState(true); // make it an accept state
// force it to uniquely predict the originally predicted state
d.cachedUniquelyPredicatedAlt =
edgeTarget.getUniquelyPredictedAlt();
i--; // back up one so that i++ of loop iteration stays within bounds
}
optimizeEOTBranches(edgeTarget);
}
}
/** Walk DFA states, unlinking the nfa configs and whatever else I
* can to reduce memory footprint.
protected void unlinkUnneededStateData(DFAState d) {
Integer sI = Utils.integer(d.stateNumber);
if ( visited.contains(sI) ) {
return; // already visited
}
visited.add(sI);
d.nfaConfigurations = null;
for (int i = 0; i < d.getNumberOfTransitions(); i++) {
Transition edge = (Transition) d.transition(i);
DFAState edgeTarget = ((DFAState)edge.target);
unlinkUnneededStateData(edgeTarget);
}
}
*/
}
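The effect of optimizeExitBranches() can be sketched on a toy model. ToyState below is a hypothetical stand-in, not the real org.antlr.analysis.DFAState; it isolates just the prune-edges-to-the-exit-alt step:

```java
import java.util.ArrayList;
import java.util.List;

/** Toy model of PRUNE_EBNF_EXIT_BRANCHES; ToyState is a stand-in, not DFAState. */
public class ExitPruneSketch {
    public static class ToyState {
        public boolean accept;
        public int predictedAlt;
        public List<ToyState> targets = new ArrayList<ToyState>();
    }

    /** Remove transitions to accept states predicting the exit alt (the max alt, nAlts). */
    public static void pruneExitBranches(ToyState d, int nAlts) {
        for (int i = 0; i < d.targets.size(); i++) {
            ToyState target = d.targets.get(i);
            if (target.accept && target.predictedAlt == nAlts) {
                d.targets.remove(i);
                i--; // back up so the loop's i++ stays within bounds, as in the real code
            }
        }
    }
}
```

For an optional block with two alts, pruning leaves only the enter-the-subrule test, matching the `int alt=2; if (input.LA(1)==DOT) alt=1;` shape shown in the class comment above.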

package org.antlr.analysis;
import org.antlr.misc.IntSet;
import org.antlr.misc.MultiMap;
import org.antlr.misc.OrderedHashSet;
import org.antlr.misc.Utils;
import org.antlr.tool.Grammar;
import java.util.*;
/** A DFA state represents a set of possible NFA configurations.
* As Aho, Sethi, Ullman p. 117 says "The DFA uses its state
* to keep track of all possible states the NFA can be in after
* reading each input symbol. That is to say, after reading
* input a1a2..an, the DFA is in a state that represents the
* subset T of the states of the NFA that are reachable from the
* NFA's start state along some path labeled a1a2..an."
* In conventional NFA->DFA conversion, therefore, the subset T
* would be a bitset representing the set of states the
* NFA could be in. We need to track the alt predicted by each
* state as well, however. More importantly, we need to maintain
* a stack of states, tracking the closure operations as they
* jump from rule to rule, emulating rule invocations (method calls).
* Recall that NFAs do not normally have a stack like a pushdown-machine
* so I have to add one to simulate the proper lookahead sequences for
* the underlying LL grammar from which the NFA was derived.
*
* I use a list of NFAConfiguration objects. An NFAConfiguration
* is both a state (ala normal conversion) and an NFAContext describing
* the chain of rules (if any) followed to arrive at that state. There
* is also the semantic context, which is the "set" of predicates found
* on the path to this configuration.
*
* A DFA state may have multiple references to a particular state,
* but with different NFAContexts (with same or different alts)
* meaning that state was reached via a different set of rule invocations.
*/
public class DFAState extends State {
public static final int INITIAL_NUM_TRANSITIONS = 4;
public static final int PREDICTED_ALT_UNSET = NFA.INVALID_ALT_NUMBER-1;
/** We are part of what DFA? Use this ref to get access to the
* context trees for an alt.
*/
public DFA dfa;
/** Track the transitions emanating from this DFA state. The List
* elements are Transition objects.
*/
protected List<Transition> transitions =
new ArrayList<Transition>(INITIAL_NUM_TRANSITIONS);
/** When doing an acyclic DFA, this is the number of lookahead symbols
* consumed to reach this state. This value may be nonzero for most
* dfa states, but it is only a valid value if the user has specified
* a max fixed lookahead.
*/
protected int k;
/** The NFA->DFA algorithm may terminate leaving some states
* without a path to an accept state, implying that upon certain
* input, the decision is not deterministic--no decision about
* predicting a unique alternative can be made. Recall that an
* accept state is one in which a unique alternative is predicted.
*/
protected int acceptStateReachable = DFA.REACHABLE_UNKNOWN;
/** Rather than recheck every NFA configuration in a DFA state (after
* resolving) in findNewDFAStatesAndAddDFATransitions just check
* this boolean. Saves a linear walk per DFA state creation.
* Every little bit helps.
*/
protected boolean resolvedWithPredicates = false;
/** If a closure operation finds that we tried to invoke the same
* rule too many times (stack would grow beyond a threshold), it
* marks the state as aborted and notifies the DecisionProbe.
*/
public boolean abortedDueToRecursionOverflow = false;
/** If we detect recursion on more than one alt, decision is non-LL(*),
* but try to isolate it to only those states whose closure operations
* detect recursion. There may be other alts that are cool:
*
* a : recur '.'
* | recur ';'
* | X Y // LL(2) decision; don't abort and use k=1 plus backtracking
* | X Z
* ;
*
* 12/13/2007: Actually this has caused problems. If k=*, must terminate
* and throw out entire DFA; retry with k=1. Since recursive, do not
* attempt more closure ops as it may take forever. Exception thrown
* now and we simply report the problem. If synpreds exist, I'll retry
* with k=1.
*/
protected boolean abortedDueToMultipleRecursiveAlts = false;
/** Build up the hash code for this state as NFA configurations
* are added, since its list of configurations grows monotonically.
*/
protected int cachedHashCode;
protected int cachedUniquelyPredicatedAlt = PREDICTED_ALT_UNSET;
public int minAltInConfigurations=Integer.MAX_VALUE;
public boolean atLeastOneConfigurationHasAPredicate = false;
/** The set of NFA configurations (state,alt,context) for this DFA state */
public OrderedHashSet<NFAConfiguration> nfaConfigurations =
new OrderedHashSet<NFAConfiguration>();
public List<NFAConfiguration> configurationsWithLabeledEdges =
new ArrayList<NFAConfiguration>();
/** Used to prevent the closure operation from looping to itself and
* hence looping forever. Sensitive to the NFA state, the alt, and
* the stack context. This is just the NFA config set because we want to
* prevent closures only on states contributed by closure not reach
* operations.
*
* Two configurations identical including semantic context are
* considered the same closure computation. @see NFAToDFAConverter.closureBusy().
*/
protected Set<NFAConfiguration> closureBusy = new HashSet<NFAConfiguration>();
/** As this state is constructed (i.e., as NFA states are added), we
* can easily check for non-epsilon transitions because the only
* transition that could be a valid label is transition(0). When we
* process this node eventually, we'll have to walk all states looking
* for all possible transitions. That is of the order: size(label space)
* times size(nfa states), which can be pretty damn big. It's better
* to simply track possible labels.
*/
protected OrderedHashSet<Label> reachableLabels;
public DFAState(DFA dfa) {
this.dfa = dfa;
}
public void reset() {
//nfaConfigurations = null; // getGatedPredicatesInNFAConfigurations needs
configurationsWithLabeledEdges = null;
closureBusy = null;
reachableLabels = null;
}
public Transition transition(int i) {
return transitions.get(i);
}
public int getNumberOfTransitions() {
return transitions.size();
}
public void addTransition(Transition t) {
transitions.add(t);
}
/** Add a transition from this state to target with label. Return
* the transition number from 0..n-1.
*/
public int addTransition(DFAState target, Label label) {
transitions.add( new Transition(label, target) );
return transitions.size()-1;
}
public Transition getTransition(int trans) {
return transitions.get(trans);
}
public void removeTransition(int trans) {
transitions.remove(trans);
}
/** Add an NFA configuration to this DFA node. Add uniquely
* an NFA state/alt/syntactic&semantic context (chain of invoking state(s)
* and semantic predicate contexts).
*
* I don't see how there could be two configurations with same
* state|alt|synCtx and different semantic contexts because the
* semantic contexts are computed along the path to a particular state
* so those two configurations would have to have the same predicate.
* Nonetheless, the addition of configurations is unique on all
* configuration info. I guess I'm saying that syntactic context
* implies semantic context as the latter is computed according to the
* former.
*
* As we add configurations to this DFA state, track the set of all possible
* transition labels so we can simply walk it later rather than doing a
* loop over all possible labels in the NFA.
*/
public void addNFAConfiguration(NFAState state, NFAConfiguration c) {
if ( nfaConfigurations.contains(c) ) {
return;
}
nfaConfigurations.add(c);
// track min alt rather than compute later
if ( c.alt < minAltInConfigurations ) {
minAltInConfigurations = c.alt;
}
if ( c.semanticContext!=SemanticContext.EMPTY_SEMANTIC_CONTEXT ) {
atLeastOneConfigurationHasAPredicate = true;
}
// update hashCode; for some reason using context.hashCode() also
// makes the GC take like 70% of the CPU and is slow!
cachedHashCode += c.state + c.alt;
// update reachableLabels
// We're adding an NFA state; check to see if it has a non-epsilon edge
if ( state.transition[0] != null ) {
Label label = state.transition[0].label;
if ( !(label.isEpsilon()||label.isSemanticPredicate()) ) {
// this NFA state has a non-epsilon edge, track for fast
// walking later when we do reach on this DFA state we're
// building.
configurationsWithLabeledEdges.add(c);
if ( state.transition[1] ==null ) {
// later we can check this to ignore o-A->o states in closure
c.singleAtomTransitionEmanating = true;
}
addReachableLabel(label);
}
}
}
public NFAConfiguration addNFAConfiguration(NFAState state,
int alt,
NFAContext context,
SemanticContext semanticContext)
{
NFAConfiguration c = new NFAConfiguration(state.stateNumber,
alt,
context,
semanticContext);
addNFAConfiguration(state, c);
return c;
}
/** Add label uniquely and disjointly; intersection with
* another set or int/char forces breaking up the set(s).
*
* Example, if reachable list of labels is [a..z, {k,9}, 0..9],
* the disjoint list will be [{a..j,l..z}, k, 9, 0..8].
*
* As we add NFA configurations to a DFA state, we might as well track
* the set of all possible transition labels to make the DFA conversion
* more efficient. W/o the reachable labels, we'd need to check the
* whole vocabulary space (could be 0..\uFFFF)! The problem is that
* labels can be sets, which may overlap with int labels or other sets.
* As we need a deterministic set of transitions from any
* state in the DFA, we must make the reachable labels set disjoint.
* This operation amounts to finding the character classes for this
* DFA state whereas with tools like flex, that need to generate a
* homogeneous DFA, must compute char classes across all states.
* We are going to generate DFAs with heterogeneous states so we
* only care that the set of transitions out of a single state are
* unique. :)
*
* The idea for adding a new set, t, is to look for overlap with the
* elements of existing list s. Upon overlap, replace
* existing set s[i] with two new disjoint sets, s[i]-t and s[i]&t.
* (if s[i]-t is nil, don't add). The remainder is t-s[i], which is
* what you want to add to the set minus what was already there. The
* remainder must then be compared against the i+1..n elements in s
* looking for another collision. Each collision results in a smaller
* and smaller remainder. Stop when you run out of s elements or
* remainder goes to nil. If remainder is non nil when you run out of
* s elements, then add remainder to the end.
*
* Single element labels are treated as sets to make the code uniform.
*/
protected void addReachableLabel(Label label) {
if ( reachableLabels==null ) {
reachableLabels = new OrderedHashSet<Label>();
}
/*
System.out.println("addReachableLabel to state "+dfa.decisionNumber+"."+stateNumber+": "+label.getSet().toString(dfa.nfa.grammar));
System.out.println("start of add to state "+dfa.decisionNumber+"."+stateNumber+": " +
"reachableLabels="+reachableLabels.toString());
*/
if ( reachableLabels.contains(label) ) { // exact label present
return;
}
IntSet t = label.getSet();
IntSet remainder = t; // remainder starts out as whole set to add
int n = reachableLabels.size(); // only look at initial elements
// walk the existing list looking for the collision
for (int i=0; i<n; i++) {
Label rl = reachableLabels.get(i);
/*
System.out.println("comparing ["+i+"]: "+label.toString(dfa.nfa.grammar)+" & "+
rl.toString(dfa.nfa.grammar)+"="+
intersection.toString(dfa.nfa.grammar));
*/
if ( !Label.intersect(label, rl) ) {
continue;
}
//System.out.println(label+" collides with "+rl);
// For any (s_i, t) with s_i&t!=nil replace with (s_i-t, s_i&t)
// (ignoring s_i-t if nil; don't put in list)
// Replace existing s_i with intersection since we
// know that will always be a non nil character class
IntSet s_i = rl.getSet();
IntSet intersection = s_i.and(t);
reachableLabels.set(i, new Label(intersection));
// Compute s_i-t to see what is in current set and not in incoming
IntSet existingMinusNewElements = s_i.subtract(t);
//System.out.println(s_i+"-"+t+"="+existingMinusNewElements);
if ( !existingMinusNewElements.isNil() ) {
// found a new character class, add to the end (doesn't affect
// outer loop duration due to n computation a priori).
Label newLabel = new Label(existingMinusNewElements);
reachableLabels.add(newLabel);
}
/*
System.out.println("after collision, " +
"reachableLabels="+reachableLabels.toString());
*/
// anything left to add to the reachableLabels?
remainder = t.subtract(s_i);
if ( remainder.isNil() ) {
break; // nothing left to add to set. done!
}
t = remainder;
}
if ( !remainder.isNil() ) {
/*
System.out.println("before add remainder to state "+dfa.decisionNumber+"."+stateNumber+": " +
"reachableLabels="+reachableLabels.toString());
System.out.println("remainder state "+dfa.decisionNumber+"."+stateNumber+": "+remainder.toString(dfa.nfa.grammar));
*/
Label newLabel = new Label(remainder);
reachableLabels.add(newLabel);
}
/*
System.out.println("#END of add to state "+dfa.decisionNumber+"."+stateNumber+": " +
"reachableLabels="+reachableLabels.toString());
*/
}
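The disjoint-partition invariant that addReachableLabel() maintains can be reproduced with plain java.util.BitSet in a few lines. This is a sketch of the splitting step only, under the assumption that the existing list is already pairwise disjoint; it is not ANTLR's IntSet/Label API:

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

/** Sketch of the disjoint-label splitting step; BitSet stands in for ANTLR's IntSet. */
public class DisjointLabelsSketch {
    /** Add set t to the list, splitting overlaps so all members stay pairwise disjoint. */
    public static void addDisjoint(List<BitSet> labels, BitSet t) {
        BitSet remainder = (BitSet) t.clone(); // the part of t not yet placed
        int n = labels.size();                 // only examine pre-existing elements
        for (int i = 0; i < n && !remainder.isEmpty(); i++) {
            BitSet s = labels.get(i);
            BitSet intersection = (BitSet) s.clone();
            intersection.and(remainder);
            if (intersection.isEmpty()) {
                continue; // no collision with s_i
            }
            BitSet sMinusT = (BitSet) s.clone();
            sMinusT.andNot(remainder);
            labels.set(i, intersection);       // replace s_i with s_i & t
            if (!sMinusT.isEmpty()) {
                labels.add(sMinusT);           // append s_i - t; past n, so not revisited
            }
            remainder.andNot(s);               // shrink remainder by what s_i consumed
        }
        if (!remainder.isEmpty()) {
            labels.add(remainder);             // leftover of t collided with nothing
        }
    }
}
```

For example, adding {3, 9} to the list [{0..5}] yields the partition [{3}, {0..2, 4, 5}, {9}], mirroring the [a..z] + {k, 9} example in the comment above.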
public OrderedHashSet getReachableLabels() {
return reachableLabels;
}
public void setNFAConfigurations(OrderedHashSet<NFAConfiguration> configs) {
this.nfaConfigurations = configs;
}
/** A decent hash for a DFA state is the sum of the NFA state/alt pairs.
* This is used when we add DFAState objects to the DFA.states Map and
* when we compare DFA states. Computed in addNFAConfiguration()
*/
public int hashCode() {
if ( cachedHashCode==0 ) {
// LL(1) algorithm doesn't use NFA configurations, which
// dynamically compute hashcode; must have something; use super
return super.hashCode();
}
return cachedHashCode;
}
/** Two DFAStates are equal if their NFA configuration sets are the
* same. This method is used to see if a DFA state already exists.
*
* Because the number of alternatives and number of NFA configurations are
* finite, there is a finite number of DFA states that can be processed.
* This is necessary to show that the algorithm terminates.
*
* Cannot test the DFA state numbers here because in DFA.addState we need
* to know if any other state exists that has this exact set of NFA
* configurations. The DFAState state number is irrelevant.
*/
public boolean equals(Object o) {
// compare set of NFA configurations in this set with other
DFAState other = (DFAState)o;
return this.nfaConfigurations.equals(other.nfaConfigurations);
}
/** Walk each configuration and if they are all the same alt, return
* that alt else return NFA.INVALID_ALT_NUMBER. Ignore resolved
* configurations, but don't ignore resolveWithPredicate configs
* because this state should not be an accept state. We need to add
* this to the work list and then have semantic predicate edges
* emanating from it.
*/
public int getUniquelyPredictedAlt() {
if ( cachedUniquelyPredicatedAlt!=PREDICTED_ALT_UNSET ) {
return cachedUniquelyPredicatedAlt;
}
int alt = NFA.INVALID_ALT_NUMBER;
int numConfigs = nfaConfigurations.size();
for (int i = 0; i < numConfigs; i++) {
NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
// ignore anything we resolved; predicates will still result
// in transitions out of this state, so must count those
// configurations; i.e., don't ignore resolveWithPredicate configs
if ( configuration.resolved ) {
continue;
}
if ( alt==NFA.INVALID_ALT_NUMBER ) {
alt = configuration.alt; // found first nonresolved alt
}
else if ( configuration.alt!=alt ) {
return NFA.INVALID_ALT_NUMBER;
}
}
this.cachedUniquelyPredicatedAlt = alt;
return alt;
}
/** Return the uniquely mentioned alt from the NFA configurations;
* Ignore the resolved bit etc... Return INVALID_ALT_NUMBER
* if there is more than one alt mentioned.
*/
public int getUniqueAlt() {
int alt = NFA.INVALID_ALT_NUMBER;
int numConfigs = nfaConfigurations.size();
for (int i = 0; i < numConfigs; i++) {
NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
if ( alt==NFA.INVALID_ALT_NUMBER ) {
alt = configuration.alt; // found first alt
}
else if ( configuration.alt!=alt ) {
return NFA.INVALID_ALT_NUMBER;
}
}
return alt;
}
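The scan in getUniqueAlt() above reduces to a simple fold over the configuration alts. A minimal stand-alone version (0 is used here as a stand-in for NFA.INVALID_ALT_NUMBER, whose actual value is not shown in this excerpt):

```java
/** Stand-alone sketch of the unique-alt scan; INVALID stands in for NFA.INVALID_ALT_NUMBER. */
public class UniqueAltSketch {
    public static final int INVALID = 0; // stand-in for NFA.INVALID_ALT_NUMBER

    /** Return the single alt all configurations agree on, or INVALID. */
    public static int uniqueAlt(int[] alts) {
        int alt = INVALID;
        for (int a : alts) {
            if (alt == INVALID) {
                alt = a;            // first alt seen
            }
            else if (a != alt) {
                return INVALID;     // disagreement: more than one alt mentioned
            }
        }
        return alt;
    }
}
```

getUniquelyPredictedAlt() is the same fold with two extras: resolved configurations are skipped, and the answer is cached in cachedUniquelyPredicatedAlt.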
/** When more than one alternative can match the same input, the first
* alternative is chosen to resolve the conflict. The other alts
* are "turned off" by setting the "resolved" flag in the NFA
* configurations. Return the set of disabled alternatives. For
*
* a : A | A | A ;
*
* this method returns {2,3} as disabled. This does not mean that
* the alternative is totally unreachable, it just means that for this
* DFA state, that alt is disabled. There may be other accept states
* for that alt.
*/
public Set getDisabledAlternatives() {
Set disabled = new LinkedHashSet();
int numConfigs = nfaConfigurations.size();
for (int i = 0; i < numConfigs; i++) {
NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
if ( configuration.resolved ) {
disabled.add(Utils.integer(configuration.alt));
}
}
return disabled;
}
protected Set getNonDeterministicAlts() {
int user_k = dfa.getUserMaxLookahead();
if ( user_k>0 && user_k==k ) {
// if fixed lookahead, then more than 1 alt is a nondeterminism
// if we have hit the max lookahead
return getAltSet();
}
else if ( abortedDueToMultipleRecursiveAlts || abortedDueToRecursionOverflow ) {
// if we had to abort for non-LL(*) state assume all alts are a problem
return getAltSet();
}
else {
return getConflictingAlts();
}
}
/** Walk each NFA configuration in this DFA state looking for a conflict
* where (s|i|ctx) and (s|j|ctx) exist, indicating that state s with
* context conflicting ctx predicts alts i and j. Return an Integer set
* of the alternative numbers that conflict. Two contexts conflict if
* they are equal or one is a stack suffix of the other or one is
* the empty context.
*
* Use a hash table to record the lists of configs for each state
* as they are encountered. We need only consider states for which
* there is more than one configuration. The configurations' predicted
* alt must be different or must have different contexts to avoid a
* conflict.
*
* Don't report conflicts for DFA states that have conflicting Tokens
* rule NFA states; they will be resolved in favor of the first rule.
*/
protected Set<Integer> getConflictingAlts() {
// TODO this is called multiple times: cache result?
//System.out.println("getNondetAlts for DFA state "+stateNumber);
Set<Integer> nondeterministicAlts = new HashSet<Integer>();
// If only 1 NFA conf then no way it can be nondeterministic;
// save the overhead. There are many o-a->o NFA transitions
// and so we save a hash map and iterator creation for each
// state.
int numConfigs = nfaConfigurations.size();
if ( numConfigs <=1 ) {
return null;
}
// First get a list of configurations for each state.
// Most of the time, each state will have one associated configuration.
MultiMap<Integer, NFAConfiguration> stateToConfigListMap =
new MultiMap<Integer, NFAConfiguration>();
for (int i = 0; i < numConfigs; i++) {
NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
Integer stateI = Utils.integer(configuration.state);
stateToConfigListMap.map(stateI, configuration);
}
// potential conflicts are states with > 1 configuration and diff alts
Set states = stateToConfigListMap.keySet();
int numPotentialConflicts = 0;
for (Iterator it = states.iterator(); it.hasNext();) {
Integer stateI = (Integer) it.next();
boolean thisStateHasPotentialProblem = false;
List configsForState = (List)stateToConfigListMap.get(stateI);
int alt=0;
int numConfigsForState = configsForState.size();
for (int i = 0; i < numConfigsForState && numConfigsForState>1 ; i++) {
NFAConfiguration c = (NFAConfiguration) configsForState.get(i);
if ( alt==0 ) {
alt = c.alt;
}
else if ( c.alt!=alt ) {
/*
System.out.println("potential conflict in state "+stateI+
" configs: "+configsForState);
*/
// 11/28/2005: don't report closures that pinch back
// together in Tokens rule. We want to silently resolve
// to the first token definition ala lex/flex by ignoring
// these conflicts.
// Also this ensures that lexers look for more and more
// characters (longest match) before resorting to predicates.
// TestSemanticPredicates.testLexerMatchesLongestThenTestPred()
// for example would terminate at state s1 and test predicate
// meaning input "ab" would test preds to decide what to
// do but it should match rule C w/o testing preds.
if ( dfa.nfa.grammar.type!=Grammar.LEXER ||
!dfa.decisionNFAStartState.enclosingRule.name.equals(Grammar.ARTIFICIAL_TOKENS_RULENAME) )
{
numPotentialConflicts++;
thisStateHasPotentialProblem = true;
}
}
}
if ( !thisStateHasPotentialProblem ) {
// remove NFA state's configurations from
// further checking; no issues with it
// (can't remove during iteration without a ConcurrentModificationException; set to null instead)
stateToConfigListMap.put(stateI, null);
}
}
// a fast check for potential issues; most states have none
if ( numPotentialConflicts==0 ) {
return null;
}
// we have a potential problem, so now go through config lists again
// looking for different alts (only states with potential issues
// are left in the states set). Now we will check context.
// For example, the list of configs for NFA state 3 in some DFA
// state might be:
// [3|2|[28 18 $], 3|1|[28 $], 3|1, 3|2]
// I want to create a map from context to alts looking for overlap:
// [28 18 $] -> 2
// [28 $] -> 1
// [$] -> 1,2
// Indeed a conflict exists as same state 3, same context [$], predicts
// alts 1 and 2.
// walk each state with potential conflicting configurations
for (Iterator it = states.iterator(); it.hasNext();) {
Integer stateI = (Integer) it.next();
List configsForState = (List)stateToConfigListMap.get(stateI);
// compare each configuration pair s, t to ensure:
// s.ctx different than t.ctx if s.alt != t.alt
int numConfigsForState = 0;
if ( configsForState!=null ) {
numConfigsForState = configsForState.size();
}
for (int i = 0; i < numConfigsForState; i++) {
NFAConfiguration s = (NFAConfiguration) configsForState.get(i);
for (int j = i+1; j < numConfigsForState; j++) {
NFAConfiguration t = (NFAConfiguration)configsForState.get(j);
// conflicts means s.ctx==t.ctx or s.ctx is a stack
// suffix of t.ctx or vice versa (if alts differ).
// Also a conflict if s.ctx or t.ctx is empty
if ( s.alt != t.alt && s.context.conflictsWith(t.context) ) {
nondeterministicAlts.add(Utils.integer(s.alt));
nondeterministicAlts.add(Utils.integer(t.alt));
}
}
}
}
if ( nondeterministicAlts.size()==0 ) {
return null;
}
return nondeterministicAlts;
}
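The conflictsWith test used above can be sketched independently: two rule-invocation contexts conflict if they are equal, one is a stack suffix of the other, or either is empty. Contexts are modeled here as plain int arrays of state numbers (the $ bottom marker left implicit); this mirrors only the comparison rule described in the comments, not the real NFAContext class:

```java
// Sketch of the context-conflict rule: same state, conflicting contexts,
// different alts means a true ambiguity. Mirrors the [28 18 $] / [28 $] / [$]
// example: the two nonempty contexts don't conflict, but empty conflicts
// with everything.
public class ContextConflictDemo {
    // true if a is a stack suffix of b (compare from the bottom of the stack)
    static boolean isSuffix(int[] a, int[] b) {
        if (a.length > b.length) return false;
        for (int i = 0; i < a.length; i++) {
            if (a[a.length - 1 - i] != b[b.length - 1 - i]) return false;
        }
        return true;
    }

    static boolean conflicts(int[] s, int[] t) {
        // empty context conflicts with anything; equality is covered by isSuffix
        return s.length == 0 || t.length == 0 || isSuffix(s, t) || isSuffix(t, s);
    }

    public static void main(String[] args) {
        System.out.println(conflicts(new int[]{28, 18}, new int[]{28})); // neither is a suffix
        System.out.println(conflicts(new int[]{}, new int[]{28, 18}));   // empty context
        System.out.println(conflicts(new int[]{18}, new int[]{28, 18})); // suffix
    }
}
```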
/** Get the set of all alts mentioned by all NFA configurations in this
* DFA state.
*/
public Set getAltSet() {
int numConfigs = nfaConfigurations.size();
Set alts = new HashSet();
for (int i = 0; i < numConfigs; i++) {
NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
alts.add(Utils.integer(configuration.alt));
}
if ( alts.size()==0 ) {
return null;
}
return alts;
}
public Set getGatedSyntacticPredicatesInNFAConfigurations() {
int numConfigs = nfaConfigurations.size();
Set<SemanticContext> synpreds = new HashSet<SemanticContext>();
for (int i = 0; i < numConfigs; i++) {
NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
SemanticContext gatedPredExpr =
configuration.semanticContext.getGatedPredicateContext();
// if this is a manual syn pred (gated and syn pred), add
if ( gatedPredExpr!=null &&
configuration.semanticContext.isSyntacticPredicate() )
{
synpreds.add(configuration.semanticContext);
}
}
if ( synpreds.size()==0 ) {
return null;
}
return synpreds;
}
/** For gated productions, we need an OR'd list of all predicates for the
* target of an edge so we can gate the edge based upon the predicates
* associated with taking that path (if any).
*
* For syntactic predicates, we only want to generate predicate
* evaluations as it transitions to an accept state; waste to
* do it earlier. So, only add gated preds derived from manually-
* specified syntactic predicates if this is an accept state.
*
* Also, since configurations w/o gated predicates are like true
* gated predicates, finding a configuration whose alt has no gated
* predicate implies we should evaluate the predicate to true. This
* means the whole edge has to be ungated. Consider:
*
* X : ('a' | {p}?=> 'a')
* | 'a' 'b'
* ;
*
* Here, 'a' gets you from s0 to s1 but you can't test p because
* plain 'a' is ok. It's also ok for starting alt 2. Hence, you can't
* test p. Even on the edge going to accept state for alt 1 of X, you
* can't test p. You can get to the same place with and w/o the context.
* Therefore, it is never ok to test p in this situation.
*
* TODO: cache this as it's called a lot; or at least set bit if >1 present in state
*/
public SemanticContext getGatedPredicatesInNFAConfigurations() {
SemanticContext unionOfPredicatesFromAllAlts = null;
int numConfigs = nfaConfigurations.size();
for (int i = 0; i < numConfigs; i++) {
NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
SemanticContext gatedPredExpr =
configuration.semanticContext.getGatedPredicateContext();
if ( gatedPredExpr==null ) {
// if we ever find a configuration w/o a gated predicate
// (even if it's a nongated predicate), we cannot gate
// the incident edges.
return null;
}
else if ( acceptState || !configuration.semanticContext.isSyntacticPredicate() ) {
// at this point we have a gated predicate and, because of the else-if,
// we know this is an accept state or the pred is not a syn pred. In this case,
// it's safe to add the gated predicate to the union. We
// only want to add syn preds if it's an accept state. Other
// gated preds can be used with edges leading to accept states.
if ( unionOfPredicatesFromAllAlts==null ) {
unionOfPredicatesFromAllAlts = gatedPredExpr;
}
else {
unionOfPredicatesFromAllAlts =
SemanticContext.or(unionOfPredicatesFromAllAlts,gatedPredExpr);
}
}
}
if ( unionOfPredicatesFromAllAlts instanceof SemanticContext.TruePredicate ) {
return null;
}
return unionOfPredicatesFromAllAlts;
}
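The union rule above can be sketched on its own: the edge predicate is the OR of each configuration's gated predicate, but a single configuration with no gated predicate ungates the whole edge. Predicates are modeled as plain Strings with null standing in for "no gated predicate"; this is an illustrative stand-in, not the real SemanticContext class:

```java
// Sketch of getGatedPredicatesInNFAConfigurations(): OR the gated predicates
// together, but return null (ungated) as soon as any configuration lacks one.
public class GatedUnionDemo {
    static String union(String[] gatedPreds) {
        String result = null;
        for (String p : gatedPreds) {
            if (p == null) return null; // one ungated path ungates the edge
            result = (result == null) ? p : "(" + result + "||" + p + ")";
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(union(new String[]{"p1", "p2"})); // OR'd union
        System.out.println(union(new String[]{"p1", null})); // edge must stay ungated
    }
}
```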
/** Is an accept state reachable from this state? */
public int getAcceptStateReachable() {
return acceptStateReachable;
}
public void setAcceptStateReachable(int acceptStateReachable) {
this.acceptStateReachable = acceptStateReachable;
}
public boolean isResolvedWithPredicates() {
return resolvedWithPredicates;
}
/** Print all NFA states plus what alts they predict */
public String toString() {
StringBuffer buf = new StringBuffer();
buf.append(stateNumber+":{");
for (int i = 0; i < nfaConfigurations.size(); i++) {
NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
if ( i>0 ) {
buf.append(", ");
}
buf.append(configuration);
}
buf.append("}");
return buf.toString();
}
public int getLookaheadDepth() {
return k;
}
public void setLookaheadDepth(int k) {
this.k = k;
if ( k > dfa.max_k ) { // track max k for entire DFA
dfa.max_k = k;
}
}
}

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.analysis;
import org.antlr.tool.ErrorManager;
import org.antlr.tool.Grammar;
import org.antlr.tool.GrammarAST;
import org.antlr.tool.ANTLRParser;
import org.antlr.misc.Utils;
import org.antlr.misc.MultiMap;
import java.util.*;
import antlr.Token;
/** Collection of information about what is wrong with a decision as
* discovered while building the DFA predictor.
*
* The information is collected during NFA->DFA conversion and, while
* some of this is available elsewhere, it is nice to have it all tracked
* in one spot so a great error message can be easily had. I also like
* the fact that this object tracks it all for later perusing to make an
* excellent error message instead of lots of imprecise on-the-fly warnings
* (during conversion).
*
* A decision normally only has one problem; e.g., some input sequence
* can be matched by multiple alternatives. Unfortunately, some decisions
* such as
*
* a : ( A | B ) | ( A | B ) | A ;
*
* have multiple problems. So in general, you should approach a decision
* as having multiple flaws each one uniquely identified by a DFAState.
* For example, statesWithSyntacticallyAmbiguousAltsSet tracks the set of
* all DFAStates where ANTLR has discovered a problem. Recall that a decision
* is represented internally with a DFA comprised of multiple states, each of
* which could potentially have problems.
*
* Because of this, you need to iterate over this list of DFA states. You'll
* note that most of the informational methods like
* getSampleNonDeterministicInputSequence() require a DFAState. This state
* will be one of the iterated states from statesWithSyntacticallyAmbiguousAltsSet.
*
* This class is not thread safe due to shared use of visited maps etc...
* Only one thread should really need to access one DecisionProbe anyway.
*/
public class DecisionProbe {
public DFA dfa;
/** Track all DFA states with nondeterministic alternatives.
* By reaching the same DFA state, a path through the NFA for some input
* is able to reach the same NFA state by starting at more than one
* alternative's left edge. Later we may find that predicates
* resolve the issue, but we track the info anyway.
* Note that from the DFA state, you can ask for
* which alts are nondeterministic.
*/
protected Set<DFAState> statesWithSyntacticallyAmbiguousAltsSet = new HashSet<DFAState>();
/** Track just like stateToSyntacticallyAmbiguousAltsMap, but only
* for nondeterminisms that arise in the Tokens rule such as keyword vs
* ID rule. The state maps to the list of Tokens rule alts that are
* in conflict.
*/
protected Map<DFAState, Set<Integer>> stateToSyntacticallyAmbiguousTokensRuleAltsMap =
new HashMap<DFAState, Set<Integer>>();
/** Was a syntactic ambiguity resolved with predicates? Any DFA
* state that predicts more than one alternative must be resolved
* with predicates or it should be reported to the user.
*/
protected Set<DFAState> statesResolvedWithSemanticPredicatesSet = new HashSet<DFAState>();
/** Track the predicates for each alt per DFA state;
* more than one DFA state might have syntactically ambig alt prediction.
* Maps DFA state to another map, mapping alt number to a
* SemanticContext (pred(s) to execute to resolve syntactic ambiguity).
*/
protected Map<DFAState, Map<Integer,SemanticContext>> stateToAltSetWithSemanticPredicatesMap =
new HashMap<DFAState, Map<Integer,SemanticContext>>();
/** Tracks alts insufficiently covered.
* For example, p1||true gets reduced to true and so leaves
* whole alt uncovered. This maps DFA state to the set of alts
*/
protected Map<DFAState,Map<Integer, Set<Token>>> stateToIncompletelyCoveredAltsMap =
new HashMap<DFAState,Map<Integer, Set<Token>>>();
/** The set of states w/o emanating edges and w/o resolving sem preds. */
protected Set<DFAState> danglingStates = new HashSet<DFAState>();
/** The overall list of alts within the decision that have at least one
* conflicting input sequence.
*/
protected Set<Integer> altsWithProblem = new HashSet<Integer>();
/** If decision with > 1 alt has recursion in > 1 alt, it's nonregular
* lookahead. The decision cannot be made with a DFA.
* The alts are stored in altsWithProblem.
*/
protected boolean nonLLStarDecision = false;
/** Recursion is limited to a particular depth. If that limit is exceeded
* the proposed new NFAConfiguration is recorded for the associated DFA state.
*/
protected MultiMap<Integer, NFAConfiguration> stateToRecursionOverflowConfigurationsMap =
new MultiMap<Integer, NFAConfiguration>();
/*
protected Map<Integer, List<NFAConfiguration>> stateToRecursionOverflowConfigurationsMap =
new HashMap<Integer, List<NFAConfiguration>>();
*/
/** Left recursion discovered. The proposed new NFAConfiguration
* is recorded for the associated DFA state.
protected Map<Integer,List<NFAConfiguration>> stateToLeftRecursiveConfigurationsMap =
new HashMap<Integer,List<NFAConfiguration>>();
*/
/** Did ANTLR have to terminate early on the analysis of this decision? */
protected boolean timedOut = false;
/** Used to find paths through syntactically ambiguous DFA. If we've
* seen this state number before, what did we learn?
*/
protected Map<Integer, Integer> stateReachable;
public static final Integer REACHABLE_BUSY = Utils.integer(-1);
public static final Integer REACHABLE_NO = Utils.integer(0);
public static final Integer REACHABLE_YES = Utils.integer(1);
/** Used while finding a path through an NFA whose edge labels match
* an input sequence. Tracks the input position
* we were at the last time at this node. If same input position, then
* we'd have reached same state without consuming input...probably an
* infinite loop. Stop. Set<String>. The strings look like
* stateNumber_labelIndex.
*/
protected Set<String> statesVisitedAtInputDepth;
protected Set<Integer> statesVisitedDuringSampleSequence;
public static boolean verbose = false;
public DecisionProbe(DFA dfa) {
this.dfa = dfa;
}
// I N F O R M A T I O N A B O U T D E C I S I O N
/** Return a string like "3:22: ( A {;} | B )" that describes this
* decision.
*/
public String getDescription() {
return dfa.getNFADecisionStartState().getDescription();
}
public boolean isReduced() {
return dfa.isReduced();
}
public boolean isCyclic() {
return dfa.isCyclic();
}
/** If no states are dead-ends, no alts are unreachable, there are
* no nondeterminisms unresolved by syn preds, all is ok with decision.
*/
public boolean isDeterministic() {
if ( danglingStates.size()==0 &&
statesWithSyntacticallyAmbiguousAltsSet.size()==0 &&
dfa.getUnreachableAlts().size()==0 )
{
return true;
}
if ( statesWithSyntacticallyAmbiguousAltsSet.size()>0 ) {
Iterator it =
statesWithSyntacticallyAmbiguousAltsSet.iterator();
while ( it.hasNext() ) {
DFAState d = (DFAState) it.next();
if ( !statesResolvedWithSemanticPredicatesSet.contains(d) ) {
return false;
}
}
// no syntactically ambig alts were left unresolved by predicates
return true;
}
return false;
}
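The determinism test above reduces to set logic: all clear when there are no dangling states, no ambiguous states, and no unreachable alts; otherwise ok only if every syntactically ambiguous state was resolved with predicates. A sketch with Integers standing in for DFAState objects (illustrative only, mirroring the branch structure of the method above):

```java
import java.util.*;

// Sketch of isDeterministic(): same branch order as the real method.
public class DeterminismDemo {
    static boolean isDeterministic(Set<Integer> dangling, Set<Integer> ambiguous,
                                   Set<Integer> resolved, List<Integer> unreachable) {
        if (dangling.isEmpty() && ambiguous.isEmpty() && unreachable.isEmpty()) {
            return true; // nothing wrong anywhere
        }
        if (!ambiguous.isEmpty()) {
            // ok only if every ambiguous state was resolved with predicates
            return resolved.containsAll(ambiguous);
        }
        return false; // dangling states or unreachable alts remain
    }

    public static void main(String[] args) {
        Set<Integer> none = Collections.emptySet();
        List<Integer> noAlts = Collections.emptyList();
        System.out.println(isDeterministic(none, none, none, noAlts)); // clean DFA
        Set<Integer> ambig = new HashSet<Integer>(Arrays.asList(3, 7));
        Set<Integer> resolved = new HashSet<Integer>(Arrays.asList(3, 7));
        System.out.println(isDeterministic(none, ambig, resolved, noAlts)); // all resolved
    }
}
```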
/** Did the analysis complete its work? */
public boolean analysisTimedOut() {
return timedOut;
}
/** Did recursion overflow while analyzing this DFA? */
public boolean analysisOverflowed() {
return stateToRecursionOverflowConfigurationsMap.size()>0;
}
/** Found recursion in > 1 alt */
public boolean isNonLLStarDecision() {
return nonLLStarDecision;
}
/** How many states does the DFA predictor have? */
public int getNumberOfStates() {
return dfa.getNumberOfStates();
}
/** Get a list of all unreachable alternatives for this decision. There
* may be multiple alternatives with ambiguous input sequences, but this
* is the overall list of unreachable alternatives (either due to
* conflict resolution or alts w/o accept states).
*/
public List<Integer> getUnreachableAlts() {
return dfa.getUnreachableAlts();
}
/** return set of states w/o emanating edges and w/o resolving sem preds.
* These states come about because the analysis algorithm had to
* terminate early to avoid infinite recursion for example (due to
* left recursion perhaps).
*/
public Set getDanglingStates() {
return danglingStates;
}
public Set getNonDeterministicAlts() {
return altsWithProblem;
}
/** Return the sorted list of alts that conflict within a single state.
* Note that predicates may resolve the conflict.
*/
public List getNonDeterministicAltsForState(DFAState targetState) {
Set nondetAlts = targetState.getNonDeterministicAlts();
if ( nondetAlts==null ) {
return null;
}
List sorted = new LinkedList();
sorted.addAll(nondetAlts);
Collections.sort(sorted); // make sure it's 1, 2, ...
return sorted;
}
/** Return all DFA states in this DFA that have NFA configurations that
* conflict. You must report a problem for each state in this set
* because each state represents a different input sequence.
*/
public Set getDFAStatesWithSyntacticallyAmbiguousAlts() {
return statesWithSyntacticallyAmbiguousAltsSet;
}
/** Which alts were specifically turned off to resolve nondeterminisms?
* This is different than the unreachable alts. Disabled doesn't mean that
* the alternative is totally unreachable necessarily, it just means
* that for this DFA state, that alt is disabled. There may be other
* accept states for that alt that make an alt reachable.
*/
public Set getDisabledAlternatives(DFAState d) {
return d.getDisabledAlternatives();
}
/** If a recursion overflow is resolved with predicates, then we need
* to shut off the warning that would be generated.
*/
public void removeRecursiveOverflowState(DFAState d) {
Integer stateI = Utils.integer(d.stateNumber);
stateToRecursionOverflowConfigurationsMap.remove(stateI);
}
/** Return a List<Label> indicating an input sequence that can be matched
* from the start state of the DFA to the targetState (which is known
* to have a problem).
*/
public List<Label> getSampleNonDeterministicInputSequence(DFAState targetState) {
Set dfaStates = getDFAPathStatesToTarget(targetState);
statesVisitedDuringSampleSequence = new HashSet<Integer>();
List<Label> labels = new ArrayList<Label>(); // may access ith element; use array
if ( dfa==null || dfa.startState==null ) {
return labels;
}
getSampleInputSequenceUsingStateSet(dfa.startState,
targetState,
dfaStates,
labels);
return labels;
}
/** Given List<Label>, return a String with a useful representation
* of the associated input string. One could show something different
* for lexers and parsers, for example.
*/
public String getInputSequenceDisplay(List labels) {
Grammar g = dfa.nfa.grammar;
StringBuffer buf = new StringBuffer();
for (Iterator it = labels.iterator(); it.hasNext();) {
Label label = (Label) it.next();
buf.append(label.toString(g));
if ( it.hasNext() && g.type!=Grammar.LEXER ) {
buf.append(' ');
}
}
return buf.toString();
}
/** Given an alternative associated with a nondeterministic DFA state,
* find the path of NFA states associated with the labels sequence.
* Useful for tracing where in the NFA a single input sequence can be
* matched. For different alts, you should get different NFA paths.
*
* The first NFA state for all NFA paths will be the same: the starting
* NFA state of the first nondeterministic alt. Imagine (A|B|A|A):
*
* 5->9-A->o
* |
* 6->10-B->o
* |
* 7->11-A->o
* |
* 8->12-A->o
*
* There are 3 nondeterministic alts. The paths should be:
* 5 9 ...
* 5 6 7 11 ...
* 5 6 7 8 12 ...
*
* The NFA path matching the sample input sequence (labels) is computed
* using states 9, 11, and 12 rather than 5, 7, 8 because state 5, for
* example can get to all ambig paths. Must isolate for each alt (hence,
* the extra state beginning each alt in my NFA structures). Here,
* firstAlt=1.
*/
public List getNFAPathStatesForAlt(int firstAlt,
int alt,
List labels)
{
NFAState nfaStart = dfa.getNFADecisionStartState();
List path = new LinkedList();
// first add all NFA states leading up to altStart state
for (int a=firstAlt; a<=alt; a++) {
NFAState s =
dfa.nfa.grammar.getNFAStateForAltOfDecision(nfaStart,a);
path.add(s);
}
// add first state of actual alt
NFAState altStart = dfa.nfa.grammar.getNFAStateForAltOfDecision(nfaStart,alt);
NFAState isolatedAltStart = (NFAState)altStart.transition[0].target;
path.add(isolatedAltStart);
// add the actual path now
statesVisitedAtInputDepth = new HashSet();
getNFAPath(isolatedAltStart,
0,
labels,
path);
return path;
}
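The path-prefix construction above can be sketched for the (A|B|A|A) example in the javadoc: the path for alt k begins with the chained alt-start states firstAlt..k, then the isolated start state of alt k itself. Plain ints model NFA state numbers; the altStart and isolated arrays are hypothetical stand-ins for the grammar lookups in the real code:

```java
import java.util.*;

// Sketch of the prefix logic in getNFAPathStatesForAlt(): walk the alt-start
// chain up to the chosen alt, then step into that alt's isolated start state.
public class NFAPathPrefixDemo {
    static List<Integer> pathPrefix(int[] altStart, int[] isolated, int firstAlt, int alt) {
        List<Integer> path = new ArrayList<Integer>();
        for (int a = firstAlt; a <= alt; a++) {
            path.add(altStart[a - 1]); // NFA states leading up to the alt
        }
        path.add(isolated[alt - 1]);   // first state of the actual alt
        return path;
    }

    public static void main(String[] args) {
        // (A|B|A|A): alt starts 5,6,7,8 feed isolated starts 9,10,11,12
        int[] altStart = {5, 6, 7, 8};
        int[] isolated = {9, 10, 11, 12};
        System.out.println(pathPrefix(altStart, isolated, 1, 3)); // path for alt 3
    }
}
```

This reproduces the "5 6 7 11 ..." prefix from the comment for alt 3.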
/** Each state in the DFA represents a different input sequence for an
* alt of the decision. Given a DFA state, what is the semantic
* predicate context for a particular alt.
*/
public SemanticContext getSemanticContextForAlt(DFAState d, int alt) {
Map altToPredMap = (Map)stateToAltSetWithSemanticPredicatesMap.get(d);
if ( altToPredMap==null ) {
return null;
}
return (SemanticContext)altToPredMap.get(Utils.integer(alt));
}
/** At least one alt refs a sem or syn pred */
public boolean hasPredicate() {
return stateToAltSetWithSemanticPredicatesMap.size()>0;
}
public Set getNondeterministicStatesResolvedWithSemanticPredicate() {
return statesResolvedWithSemanticPredicatesSet;
}
/** Return a list of alts whose predicate context was insufficient to
* resolve a nondeterminism for state d.
*/
public Map<Integer, Set<Token>> getIncompletelyCoveredAlts(DFAState d) {
return stateToIncompletelyCoveredAltsMap.get(d);
}
public void issueWarnings() {
// NONREGULAR DUE TO RECURSION > 1 ALTS
// Issue this before aborted analysis, which might also occur
// if we take too long to terminate
if ( nonLLStarDecision && !dfa.getAutoBacktrackMode() ) {
ErrorManager.nonLLStarDecision(this);
}
if ( analysisTimedOut() ) {
// only report early termination errors if !backtracking
if ( !dfa.getAutoBacktrackMode() ) {
ErrorManager.analysisAborted(this);
}
// now just return...if we bailed out, don't spew other messages
return;
}
issueRecursionWarnings();
// generate a separate message for each problem state in DFA
Set resolvedStates = getNondeterministicStatesResolvedWithSemanticPredicate();
Set problemStates = getDFAStatesWithSyntacticallyAmbiguousAlts();
if ( problemStates.size()>0 ) {
Iterator it =
problemStates.iterator();
while ( it.hasNext() && !dfa.nfa.grammar.NFAToDFAConversionExternallyAborted() ) {
DFAState d = (DFAState) it.next();
Map<Integer, Set<Token>> insufficientAltToLocations = getIncompletelyCoveredAlts(d);
if ( insufficientAltToLocations!=null && insufficientAltToLocations.size()>0 ) {
ErrorManager.insufficientPredicates(this,d,insufficientAltToLocations);
}
// don't report problem if resolved
if ( resolvedStates==null || !resolvedStates.contains(d) ) {
// first strip last alt from disabledAlts if it's wildcard
// then don't print error if no more disable alts
Set disabledAlts = getDisabledAlternatives(d);
stripWildCardAlts(disabledAlts);
if ( disabledAlts.size()>0 ) {
ErrorManager.nondeterminism(this,d);
}
}
}
}
Set danglingStates = getDanglingStates();
if ( danglingStates.size()>0 ) {
//System.err.println("no emanating edges for states: "+danglingStates);
for (Iterator it = danglingStates.iterator(); it.hasNext();) {
DFAState d = (DFAState) it.next();
ErrorManager.danglingState(this,d);
}
}
if ( !nonLLStarDecision ) {
List<Integer> unreachableAlts = dfa.getUnreachableAlts();
if ( unreachableAlts!=null && unreachableAlts.size()>0 ) {
// give different msg if it's an empty Tokens rule from delegate
boolean isInheritedTokensRule = false;
if ( dfa.isTokensRuleDecision() ) {
for (Integer altI : unreachableAlts) {
GrammarAST decAST = dfa.getDecisionASTNode();
GrammarAST altAST = decAST.getChild(altI-1);
GrammarAST delegatedTokensAlt =
altAST.getFirstChildWithType(ANTLRParser.DOT);
if ( delegatedTokensAlt !=null ) {
isInheritedTokensRule = true;
ErrorManager.grammarWarning(ErrorManager.MSG_IMPORTED_TOKENS_RULE_EMPTY,
dfa.nfa.grammar,
null,
dfa.nfa.grammar.name,
delegatedTokensAlt.getFirstChild().getText());
}
}
}
if ( !isInheritedTokensRule ) {
ErrorManager.unreachableAlts(this,unreachableAlts);
}
}
}
}
/** Get the last disabled alt number and check in the grammar to see
* if that alt is a simple wildcard. If so, treat like an else clause
* and don't emit the error. Strip out the last alt if it's wildcard.
*/
protected void stripWildCardAlts(Set disabledAlts) {
List sortedDisableAlts = new ArrayList(disabledAlts);
Collections.sort(sortedDisableAlts);
Integer lastAlt =
(Integer)sortedDisableAlts.get(sortedDisableAlts.size()-1);
GrammarAST blockAST =
dfa.nfa.grammar.getDecisionBlockAST(dfa.decisionNumber);
//System.out.println("block with error = "+blockAST.toStringTree());
GrammarAST lastAltAST = null;
if ( blockAST.getChild(0).getType()==ANTLRParser.OPTIONS ) {
// if options, skip first child: ( options { ( = greedy false ) )
lastAltAST = blockAST.getChild(lastAlt.intValue());
}
else {
lastAltAST = blockAST.getChild(lastAlt.intValue()-1);
}
//System.out.println("last alt is "+lastAltAST.toStringTree());
// if last alt looks like ( ALT . <end-of-alt> ) then wildcard
// Avoid looking at optional blocks etc... that have last alt
// as the EOB:
// ( BLOCK ( ALT 'else' statement <end-of-alt> ) <end-of-block> )
if ( lastAltAST.getType()!=ANTLRParser.EOB &&
lastAltAST.getChild(0).getType()== ANTLRParser.WILDCARD &&
lastAltAST.getChild(1).getType()== ANTLRParser.EOA )
{
//System.out.println("wildcard");
disabledAlts.remove(lastAlt);
}
}
protected void issueRecursionWarnings() {
// RECURSION OVERFLOW
Set dfaStatesWithRecursionProblems =
stateToRecursionOverflowConfigurationsMap.keySet();
// now walk truly unique (unaliased) list of dfa states with inf recur
// Goal: create a map from alt to map<target,List<callsites>>
// Map&lt;Integer alt, Map&lt;String target, List&lt;NFAState call sites&gt;&gt;&gt;
Map altToTargetToCallSitesMap = new HashMap();
// track a single problem DFA state for each alt
Map altToDFAState = new HashMap();
computeAltToProblemMaps(dfaStatesWithRecursionProblems,
stateToRecursionOverflowConfigurationsMap,
altToTargetToCallSitesMap, // output param
altToDFAState); // output param
// walk each alt with recursion overflow problems and generate error
Set alts = altToTargetToCallSitesMap.keySet();
List sortedAlts = new ArrayList(alts);
Collections.sort(sortedAlts);
for (Iterator altsIt = sortedAlts.iterator(); altsIt.hasNext();) {
Integer altI = (Integer) altsIt.next();
Map targetToCallSiteMap =
(Map)altToTargetToCallSitesMap.get(altI);
Set targetRules = targetToCallSiteMap.keySet();
Collection callSiteStates = targetToCallSiteMap.values();
DFAState sampleBadState = (DFAState)altToDFAState.get(altI);
ErrorManager.recursionOverflow(this,
sampleBadState,
altI.intValue(),
targetRules,
callSiteStates);
}
}
private void computeAltToProblemMaps(Set dfaStatesUnaliased,
Map configurationsMap,
Map altToTargetToCallSitesMap,
Map altToDFAState)
{
for (Iterator it = dfaStatesUnaliased.iterator(); it.hasNext();) {
Integer stateI = (Integer) it.next();
// walk this DFA's config list
List configs = (List)configurationsMap.get(stateI);
for (int i = 0; i < configs.size(); i++) {
NFAConfiguration c = (NFAConfiguration) configs.get(i);
NFAState ruleInvocationState = dfa.nfa.getState(c.state);
Transition transition0 = ruleInvocationState.transition[0];
RuleClosureTransition ref = (RuleClosureTransition)transition0;
String targetRule = ((NFAState) ref.target).enclosingRule.name;
Integer altI = Utils.integer(c.alt);
Map targetToCallSiteMap =
(Map)altToTargetToCallSitesMap.get(altI);
if ( targetToCallSiteMap==null ) {
targetToCallSiteMap = new HashMap();
altToTargetToCallSitesMap.put(altI, targetToCallSiteMap);
}
Set callSites =
(HashSet)targetToCallSiteMap.get(targetRule);
if ( callSites==null ) {
callSites = new HashSet();
targetToCallSiteMap.put(targetRule, callSites);
}
callSites.add(ruleInvocationState);
// track one problem DFA state per alt
if ( altToDFAState.get(altI)==null ) {
DFAState sampleBadState = dfa.getState(stateI.intValue());
altToDFAState.put(altI, sampleBadState);
}
}
}
}
private Set getUnaliasedDFAStateSet(Set dfaStatesWithRecursionProblems) {
Set dfaStatesUnaliased = new HashSet();
for (Iterator it = dfaStatesWithRecursionProblems.iterator(); it.hasNext();) {
Integer stateI = (Integer) it.next();
DFAState d = dfa.getState(stateI.intValue());
dfaStatesUnaliased.add(Utils.integer(d.stateNumber));
}
return dfaStatesUnaliased;
}
// T R A C K I N G M E T H O D S
/** Report the fact that DFA state d is not a state resolved with
* predicates and yet it has no emanating edges. Usually this
* is a result of the closure/reach operations being unable to proceed.
*/
public void reportDanglingState(DFAState d) {
danglingStates.add(d);
}
public void reportAnalysisTimeout() {
timedOut = true;
dfa.nfa.grammar.setOfDFAWhoseAnalysisTimedOut.add(dfa);
}
/** Report that at least 2 alts have recursive constructs. There is
* no way to build a DFA so we terminated.
*/
public void reportNonLLStarDecision(DFA dfa) {
/*
System.out.println("non-LL(*) DFA "+dfa.decisionNumber+", alts: "+
dfa.recursiveAltSet.toList());
*/
nonLLStarDecision = true;
altsWithProblem.addAll(dfa.recursiveAltSet.toList());
}
public void reportRecursionOverflow(DFAState d,
NFAConfiguration recursionNFAConfiguration)
{
// track the state number rather than the state as d will change
// out from underneath us; its hash would change, so a map lookup would fail.
// left-recursion is detected in start state. Since we can't
// call resolveNondeterminism() on the start state (it would
// not look k=1 to get min single token lookahead), we must
// prevent errors derived from this state. Avoid start state
if ( d.stateNumber > 0 ) {
Integer stateI = Utils.integer(d.stateNumber);
stateToRecursionOverflowConfigurationsMap.map(stateI, recursionNFAConfiguration);
}
}
public void reportNondeterminism(DFAState d, Set<Integer> nondeterministicAlts) {
altsWithProblem.addAll(nondeterministicAlts); // track overall list
statesWithSyntacticallyAmbiguousAltsSet.add(d);
dfa.nfa.grammar.setOfNondeterministicDecisionNumbers.add(
Utils.integer(dfa.getDecisionNumber())
);
}
/** Currently the analysis reports issues between token definitions, but
* we don't print out warnings in favor of just picking the first token
 * definition found in the grammar, à la lex/flex.
*/
public void reportLexerRuleNondeterminism(DFAState d, Set<Integer> nondeterministicAlts) {
stateToSyntacticallyAmbiguousTokensRuleAltsMap.put(d,nondeterministicAlts);
}
public void reportNondeterminismResolvedWithSemanticPredicate(DFAState d) {
// First, prevent a recursion warning on this state due to
// pred resolution
if ( d.abortedDueToRecursionOverflow ) {
d.dfa.probe.removeRecursiveOverflowState(d);
}
statesResolvedWithSemanticPredicatesSet.add(d);
//System.out.println("resolved with pred: "+d);
dfa.nfa.grammar.setOfNondeterministicDecisionNumbersResolvedWithPredicates.add(
Utils.integer(dfa.getDecisionNumber())
);
}
/** Report the list of predicates found for each alternative; copy
* the list because this set gets altered later by the method
* tryToResolveWithSemanticPredicates() while flagging NFA configurations
* in d as resolved.
*/
public void reportAltPredicateContext(DFAState d, Map altPredicateContext) {
Map copy = new HashMap();
copy.putAll(altPredicateContext);
stateToAltSetWithSemanticPredicatesMap.put(d,copy);
}
public void reportIncompletelyCoveredAlts(DFAState d,
Map<Integer, Set<Token>> altToLocationsReachableWithoutPredicate)
{
stateToIncompletelyCoveredAltsMap.put(d, altToLocationsReachableWithoutPredicate);
}
// S U P P O R T
/** Given a start state and a target state, return true if start can reach
* target state. Also, compute the set of DFA states
* that are on a path from start to target; return in states parameter.
*/
protected boolean reachesState(DFAState startState,
DFAState targetState,
Set states) {
if ( startState==targetState ) {
states.add(targetState);
//System.out.println("found target DFA state "+targetState.getStateNumber());
stateReachable.put(startState.stateNumber, REACHABLE_YES);
return true;
}
DFAState s = startState;
// avoid infinite loops
stateReachable.put(s.stateNumber, REACHABLE_BUSY);
// look for a path to targetState among transitions for this state
// stop when you find the first one; I'm pretty sure there is
// at most one path to any DFA state with conflicting predictions
for (int i=0; i<s.getNumberOfTransitions(); i++) {
Transition t = s.transition(i);
DFAState edgeTarget = (DFAState)t.target;
Integer targetStatus = stateReachable.get(edgeTarget.stateNumber);
if ( targetStatus==REACHABLE_BUSY ) { // avoid cycles; they say nothing
continue;
}
if ( targetStatus==REACHABLE_YES ) { // return success!
stateReachable.put(s.stateNumber, REACHABLE_YES);
return true;
}
if ( targetStatus==REACHABLE_NO ) { // try another transition
continue;
}
// if null, target must be REACHABLE_UNKNOWN (i.e., unvisited)
if ( reachesState(edgeTarget, targetState, states) ) {
states.add(s);
stateReachable.put(s.stateNumber, REACHABLE_YES);
return true;
}
}
stateReachable.put(s.stateNumber, REACHABLE_NO);
return false; // no path to targetState found.
}
protected Set getDFAPathStatesToTarget(DFAState targetState) {
Set dfaStates = new HashSet();
stateReachable = new HashMap();
if ( dfa==null || dfa.startState==null ) {
return dfaStates;
}
reachesState(dfa.startState, targetState, dfaStates); // fills dfaStates as a side effect
return dfaStates;
}
/** Given a start state and a final state, find a list of edge labels
* between the two ignoring epsilon. Limit your scan to a set of states
* passed in. This is used to show a sample input sequence that is
* nondeterministic with respect to this decision. Return List<Label> as
* a parameter. The incoming states set must be all states that lead
* from startState to targetState and no others so this algorithm doesn't
* take a path that eventually leads to a state other than targetState.
* Don't follow loops, leading to short (possibly shortest) path.
*/
protected void getSampleInputSequenceUsingStateSet(State startState,
State targetState,
Set states,
List<Label> labels)
{
statesVisitedDuringSampleSequence.add(startState.stateNumber);
// pick the first edge in states as the one to traverse
for (int i=0; i<startState.getNumberOfTransitions(); i++) {
Transition t = startState.transition(i);
DFAState edgeTarget = (DFAState)t.target;
if ( states.contains(edgeTarget) &&
!statesVisitedDuringSampleSequence.contains(edgeTarget.stateNumber) )
{
labels.add(t.label); // traverse edge and track label
if ( edgeTarget!=targetState ) {
// get more labels if not at target
getSampleInputSequenceUsingStateSet(edgeTarget,
targetState,
states,
labels);
}
// done with this DFA state as we've found a good path to target
return;
}
}
labels.add(new Label(Label.EPSILON)); // indicate no input found
// this happens on a : {p1}? a | A ;
//ErrorManager.error(ErrorManager.MSG_CANNOT_COMPUTE_SAMPLE_INPUT_SEQ);
}
/** Given a sample input sequence, you usually would like to know the
* path taken through the NFA. Return the list of NFA states visited
* while matching a list of labels. This cannot use the usual
* interpreter, which does a deterministic walk. We need to be able to
* take paths that are turned off during nondeterminism resolution. So,
* just do a depth-first walk traversing edges labeled with the current
* label. Return true if a path was found emanating from state s.
*/
protected boolean getNFAPath(NFAState s, // starting where?
int labelIndex, // 0..labels.size()-1
List labels, // input sequence
List path) // output list of NFA states
{
// track a visit to state s at input index labelIndex if not seen
String thisStateKey = getStateLabelIndexKey(s.stateNumber,labelIndex);
if ( statesVisitedAtInputDepth.contains(thisStateKey) ) {
/*
System.out.println("### already visited "+s.stateNumber+" previously at index "+
labelIndex);
*/
return false;
}
statesVisitedAtInputDepth.add(thisStateKey);
/*
System.out.println("enter state "+s.stateNumber+" visited states: "+
statesVisitedAtInputDepth);
*/
// pick the first edge whose target is in states and whose
// label is labels[labelIndex]
for (int i=0; i<s.getNumberOfTransitions(); i++) {
Transition t = s.transition[i];
NFAState edgeTarget = (NFAState)t.target;
Label label = (Label)labels.get(labelIndex);
/*
System.out.println(s.stateNumber+"-"+
t.label.toString(dfa.nfa.grammar)+"->"+
edgeTarget.stateNumber+" =="+
label.toString(dfa.nfa.grammar)+"?");
*/
if ( t.label.isEpsilon() || t.label.isSemanticPredicate() ) {
// nondeterministically backtrack down epsilon edges
path.add(edgeTarget);
boolean found =
getNFAPath(edgeTarget, labelIndex, labels, path);
if ( found ) {
statesVisitedAtInputDepth.remove(thisStateKey);
return true; // return to "calling" state
}
path.remove(path.size()-1); // remove; didn't work out
continue; // look at the next edge
}
if ( t.label.matches(label) ) {
path.add(edgeTarget);
/*
System.out.println("found label "+
t.label.toString(dfa.nfa.grammar)+
" at state "+s.stateNumber+"; labelIndex="+labelIndex);
*/
if ( labelIndex==labels.size()-1 ) {
// found last label; done!
statesVisitedAtInputDepth.remove(thisStateKey);
return true;
}
// otherwise try to match remaining input
boolean found =
getNFAPath(edgeTarget, labelIndex+1, labels, path);
if ( found ) {
statesVisitedAtInputDepth.remove(thisStateKey);
return true;
}
/*
System.out.println("backtrack; path from "+s.stateNumber+"->"+
t.label.toString(dfa.nfa.grammar)+" didn't work");
*/
path.remove(path.size()-1); // remove; didn't work out
continue; // keep looking for a path for labels
}
}
//System.out.println("no epsilon or matching edge; removing "+thisStateKey);
// no edge was found matching label; is ok, some state will have it
statesVisitedAtInputDepth.remove(thisStateKey);
return false;
}
protected String getStateLabelIndexKey(int s, int i) {
StringBuffer buf = new StringBuffer();
buf.append(s);
buf.append('_');
buf.append(i);
return buf.toString();
}
/** From an alt number associated with artificial Tokens rule, return
* the name of the token that is associated with that alt.
*/
public String getTokenNameForTokensRuleAlt(int alt) {
NFAState decisionState = dfa.getNFADecisionStartState();
NFAState altState =
dfa.nfa.grammar.getNFAStateForAltOfDecision(decisionState,alt);
NFAState decisionLeft = (NFAState)altState.transition[0].target;
RuleClosureTransition ruleCallEdge =
(RuleClosureTransition)decisionLeft.transition[0];
NFAState ruleStartState = (NFAState)ruleCallEdge.target;
//System.out.println("alt = "+decisionLeft.getEnclosingRule());
return ruleStartState.enclosingRule.name;
}
public void reset() {
stateToRecursionOverflowConfigurationsMap.clear();
}
}
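The memoized walk that reachesState() performs above can be sketched standalone: a depth-first search that marks each state BUSY while it is on the current path (so cycles contribute nothing), caches YES/NO once an answer is known, and collects the states on a successful path. All class and method names below are invented for illustration and are not part of the ANTLR API.

```java
import java.util.*;

// Hypothetical sketch of the BUSY/YES/NO memoized reachability walk
// used by DecisionProbe.reachesState(); graph and names are invented.
public class Reach {
    static final Integer BUSY = 0, NO = 1, YES = 2;
    final Map<Integer, List<Integer>> edges = new HashMap<>();
    final Map<Integer, Integer> status = new HashMap<>();

    void edge(int from, int to) {
        edges.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    boolean reaches(int s, int target, Set<Integer> onPath) {
        if (s == target) { status.put(s, YES); onPath.add(s); return true; }
        status.put(s, BUSY);                        // avoid infinite loops
        for (int t : edges.getOrDefault(s, Collections.emptyList())) {
            Integer st = status.get(t);
            if (BUSY.equals(st) || NO.equals(st)) continue; // cycle or dead end
            if (YES.equals(st) || reaches(t, target, onPath)) {
                status.put(s, YES);
                onPath.add(s);                      // s lies on a path to target
                return true;
            }
        }
        status.put(s, NO);
        return false;
    }

    public static void main(String[] args) {
        Reach g = new Reach();
        g.edge(0, 1); g.edge(1, 0);                 // cycle 0<->1
        g.edge(1, 2); g.edge(2, 3);
        Set<Integer> path = new HashSet<>();
        System.out.println(g.reaches(0, 3, path));  // true
        System.out.println(path.containsAll(Arrays.asList(0, 1, 2, 3))); // true
    }
}
```

The BUSY marker is what lets the search skip back-edges without concluding NO prematurely; only after all transitions fail is the state cached as unreachable.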

/*
[The "BSD licence"]
Copyright (c) 2005-2008 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.analysis;
import org.antlr.tool.Rule;
import org.antlr.tool.ANTLRParser;
import org.antlr.tool.Grammar;
import org.antlr.misc.IntervalSet;
import org.antlr.misc.IntSet;
import java.util.*;
/** Computes LL(1) lookahead information: FIRST and FOLLOW sets for
 *  NFA states and rules, plus detection of predicates visible from a
 *  decision.
 *
 *  @author parrt (Dec 31, 2007)
 */
public class LL1Analyzer {
/** 0 if we hit end of rule and invoker should keep going (epsilon) */
public static final int DETECT_PRED_EOR = 0;
/** 1 if we found a nonautobacktracking pred */
public static final int DETECT_PRED_FOUND = 1;
/** 2 if we didn't find such a pred */
public static final int DETECT_PRED_NOT_FOUND = 2;
public Grammar grammar;
/** Used during LOOK to detect computation cycles */
protected Set<NFAState> lookBusy = new HashSet<NFAState>();
public Map<NFAState, LookaheadSet> FIRSTCache = new HashMap<NFAState, LookaheadSet>();
public Map<Rule, LookaheadSet> FOLLOWCache = new HashMap<Rule, LookaheadSet>();
public LL1Analyzer(Grammar grammar) {
this.grammar = grammar;
}
/*
public void computeRuleFIRSTSets() {
if ( getNumberOfDecisions()==0 ) {
createNFAs();
}
for (Iterator it = getRules().iterator(); it.hasNext();) {
Rule r = (Rule)it.next();
if ( r.isSynPred ) {
continue;
}
LookaheadSet s = FIRST(r);
System.out.println("FIRST("+r.name+")="+s);
}
}
*/
/*
public Set<String> getOverriddenRulesWithDifferentFIRST() {
// walk every rule in this grammar and compare FIRST set with
// those in imported grammars.
Set<String> rules = new HashSet();
for (Iterator it = getRules().iterator(); it.hasNext();) {
Rule r = (Rule)it.next();
//System.out.println(r.name+" FIRST="+r.FIRST);
for (int i = 0; i < delegates.size(); i++) {
Grammar g = delegates.get(i);
Rule importedRule = g.getRule(r.name);
if ( importedRule != null ) { // exists in imported grammar
// System.out.println(r.name+" exists in imported grammar: FIRST="+importedRule.FIRST);
if ( !r.FIRST.equals(importedRule.FIRST) ) {
rules.add(r.name);
}
}
}
}
return rules;
}
public Set<Rule> getImportedRulesSensitiveToOverriddenRulesDueToLOOK() {
Set<String> diffFIRSTs = getOverriddenRulesWithDifferentFIRST();
Set<Rule> rules = new HashSet();
for (Iterator it = diffFIRSTs.iterator(); it.hasNext();) {
String r = (String) it.next();
for (int i = 0; i < delegates.size(); i++) {
Grammar g = delegates.get(i);
Set<Rule> callers = g.ruleSensitivity.get(r);
// somebody invokes rule whose FIRST changed in subgrammar?
if ( callers!=null ) {
rules.addAll(callers);
//System.out.println(g.name+" rules "+callers+" sensitive to "+r+"; dup 'em");
}
}
}
return rules;
}
*/
/*
public LookaheadSet LOOK(Rule r) {
if ( r.FIRST==null ) {
r.FIRST = FIRST(r.startState);
}
return r.FIRST;
}
*/
/** From an NFA state, s, find the set of all labels reachable from s.
* Used to compute follow sets for error recovery. Never computes
* a FOLLOW operation. FIRST stops at end of rules, returning EOR, unless
* invoked from another rule. I.e., routine properly handles
*
* a : b A ;
*
* where b is nullable.
*
* We record with EOR_TOKEN_TYPE if we hit the end of a rule so we can
* know at runtime (when these sets are used) to start walking up the
* follow chain to compute the real, correct follow set (as opposed to
* the FOLLOW, which is a superset).
*
* This routine will only be used on parser and tree parser grammars.
*/
public LookaheadSet FIRST(NFAState s) {
//System.out.println("> FIRST("+s+") in rule "+s.enclosingRule);
lookBusy.clear();
LookaheadSet look = _FIRST(s, false);
//System.out.println("< FIRST("+s+") in rule "+s.enclosingRule+"="+look.toString(this));
return look;
}
public LookaheadSet FOLLOW(Rule r) {
LookaheadSet f = FOLLOWCache.get(r);
if ( f!=null ) {
return f;
}
f = _FIRST(r.stopState, true);
FOLLOWCache.put(r, f);
return f;
}
public LookaheadSet LOOK(NFAState s) {
if ( NFAToDFAConverter.debug ) {
System.out.println("> LOOK("+s+")");
}
lookBusy.clear();
LookaheadSet look = _FIRST(s, true);
// FOLLOW makes no sense (at the moment!) for lexical rules.
if ( grammar.type!=Grammar.LEXER && look.member(Label.EOR_TOKEN_TYPE) ) {
// avoid altering FIRST result as it is cached
LookaheadSet f = FOLLOW(s.enclosingRule);
f.orInPlace(look);
f.remove(Label.EOR_TOKEN_TYPE);
look = f;
//look.orInPlace(FOLLOW(s.enclosingRule));
}
else if ( grammar.type==Grammar.LEXER && look.member(Label.EOT) ) {
// if this has EOT, lookahead is all char (all char can follow rule)
//look = new LookaheadSet(Label.EOT);
look = new LookaheadSet(IntervalSet.COMPLETE_SET);
}
if ( NFAToDFAConverter.debug ) {
System.out.println("< LOOK("+s+")="+look.toString(grammar));
}
return look;
}
protected LookaheadSet _FIRST(NFAState s, boolean chaseFollowTransitions) {
//System.out.println("_LOOK("+s+") in rule "+s.enclosingRule);
/*
if ( s.transition[0] instanceof RuleClosureTransition ) {
System.out.println("go to rule "+((NFAState)s.transition[0].target).enclosingRule);
}
*/
if ( !chaseFollowTransitions && s.isAcceptState() ) {
if ( grammar.type==Grammar.LEXER ) {
// FOLLOW makes no sense (at the moment!) for lexical rules.
// assume all char can follow
return new LookaheadSet(IntervalSet.COMPLETE_SET);
}
return new LookaheadSet(Label.EOR_TOKEN_TYPE);
}
if ( lookBusy.contains(s) ) {
// return a copy of an empty set; we may modify set inline
return new LookaheadSet();
}
lookBusy.add(s);
Transition transition0 = s.transition[0];
if ( transition0==null ) {
return null;
}
if ( transition0.label.isAtom() ) {
int atom = transition0.label.getAtom();
return new LookaheadSet(atom);
}
if ( transition0.label.isSet() ) {
IntSet sl = transition0.label.getSet();
return new LookaheadSet(sl);
}
// compute FIRST of transition 0
LookaheadSet tset = null;
// if transition 0 is a rule call and we don't want FOLLOW, check cache
if ( !chaseFollowTransitions && transition0 instanceof RuleClosureTransition ) {
LookaheadSet prev = FIRSTCache.get((NFAState)transition0.target);
if ( prev!=null ) {
tset = prev;
}
}
// if not in cache, must compute
if ( tset==null ) {
tset = _FIRST((NFAState)transition0.target, chaseFollowTransitions);
// save FIRST cache for transition 0 if rule call
if ( !chaseFollowTransitions && transition0 instanceof RuleClosureTransition ) {
FIRSTCache.put((NFAState)transition0.target, tset);
}
}
// did we fall off the end?
if ( grammar.type!=Grammar.LEXER && tset.member(Label.EOR_TOKEN_TYPE) ) {
if ( transition0 instanceof RuleClosureTransition ) {
// we called a rule that found the end of the rule.
// That means the rule is nullable and we need to
// keep looking at what follows the rule ref. E.g.,
// a : b A ; where b is nullable means that LOOK(a)
// should include A.
RuleClosureTransition ruleInvocationTrans =
(RuleClosureTransition)transition0;
// remove the EOR and get what follows
//tset.remove(Label.EOR_TOKEN_TYPE);
NFAState following = (NFAState) ruleInvocationTrans.followState;
LookaheadSet fset = _FIRST(following, chaseFollowTransitions);
fset.orInPlace(tset); // tset cached; or into new set
fset.remove(Label.EOR_TOKEN_TYPE);
tset = fset;
}
}
Transition transition1 = s.transition[1];
if ( transition1!=null ) {
LookaheadSet tset1 =
_FIRST((NFAState)transition1.target, chaseFollowTransitions);
tset1.orInPlace(tset); // tset cached; or into new set
tset = tset1;
}
return tset;
}
/** Is there a non-syn-pred predicate visible from s that is not in
* the rule enclosing s? This accounts for most predicate situations
* and lets ANTLR do a simple LL(1)+pred computation.
*
* TODO: what about gated vs regular preds?
*/
public boolean detectConfoundingPredicates(NFAState s) {
lookBusy.clear();
Rule r = s.enclosingRule;
return _detectConfoundingPredicates(s, r, false) == DETECT_PRED_FOUND;
}
protected int _detectConfoundingPredicates(NFAState s,
Rule enclosingRule,
boolean chaseFollowTransitions)
{
//System.out.println("_detectNonAutobacktrackPredicates("+s+")");
if ( !chaseFollowTransitions && s.isAcceptState() ) {
if ( grammar.type==Grammar.LEXER ) {
// FOLLOW makes no sense (at the moment!) for lexical rules.
// assume all char can follow
return DETECT_PRED_NOT_FOUND;
}
return DETECT_PRED_EOR;
}
if ( lookBusy.contains(s) ) {
// already visited this state on this walk; avoid cycles
return DETECT_PRED_NOT_FOUND;
}
lookBusy.add(s);
Transition transition0 = s.transition[0];
if ( transition0==null ) {
return DETECT_PRED_NOT_FOUND;
}
if ( !(transition0.label.isSemanticPredicate()||
transition0.label.isEpsilon()) ) {
return DETECT_PRED_NOT_FOUND;
}
if ( transition0.label.isSemanticPredicate() ) {
//System.out.println("pred "+transition0.label);
SemanticContext ctx = transition0.label.getSemanticContext();
SemanticContext.Predicate p = (SemanticContext.Predicate)ctx;
if ( p.predicateAST.getType() != ANTLRParser.BACKTRACK_SEMPRED ) {
return DETECT_PRED_FOUND;
}
}
/*
if ( transition0.label.isSemanticPredicate() ) {
System.out.println("pred "+transition0.label);
SemanticContext ctx = transition0.label.getSemanticContext();
SemanticContext.Predicate p = (SemanticContext.Predicate)ctx;
// if a non-syn-pred found not in enclosingRule, say we found one
if ( p.predicateAST.getType() != ANTLRParser.BACKTRACK_SEMPRED &&
!p.predicateAST.enclosingRuleName.equals(enclosingRule.name) )
{
System.out.println("found pred "+p+" not in "+enclosingRule.name);
return DETECT_PRED_FOUND;
}
}
*/
int result = _detectConfoundingPredicates((NFAState)transition0.target,
enclosingRule,
chaseFollowTransitions);
if ( result == DETECT_PRED_FOUND ) {
return DETECT_PRED_FOUND;
}
if ( result == DETECT_PRED_EOR ) {
if ( transition0 instanceof RuleClosureTransition ) {
// we called a rule that found the end of the rule.
// That means the rule is nullable and we need to
// keep looking at what follows the rule ref. E.g.,
// a : b A ; where b is nullable means that LOOK(a)
// should include A.
RuleClosureTransition ruleInvocationTrans =
(RuleClosureTransition)transition0;
NFAState following = (NFAState) ruleInvocationTrans.followState;
int afterRuleResult =
_detectConfoundingPredicates(following,
enclosingRule,
chaseFollowTransitions);
if ( afterRuleResult == DETECT_PRED_FOUND ) {
return DETECT_PRED_FOUND;
}
}
}
Transition transition1 = s.transition[1];
if ( transition1!=null ) {
int t1Result =
_detectConfoundingPredicates((NFAState)transition1.target,
enclosingRule,
chaseFollowTransitions);
if ( t1Result == DETECT_PRED_FOUND ) {
return DETECT_PRED_FOUND;
}
}
return DETECT_PRED_NOT_FOUND;
}
/** Return predicate expression found via epsilon edges from s. Do
* not look into other rules for now. Do something simple. Include
* backtracking synpreds.
*/
public SemanticContext getPredicates(NFAState altStartState) {
lookBusy.clear();
return _getPredicates(altStartState, altStartState);
}
protected SemanticContext _getPredicates(NFAState s, NFAState altStartState) {
//System.out.println("_getPredicates("+s+")");
if ( s.isAcceptState() ) {
return null;
}
// avoid infinite loops from (..)* etc...
if ( lookBusy.contains(s) ) {
return null;
}
lookBusy.add(s);
Transition transition0 = s.transition[0];
// no transitions
if ( transition0==null ) {
return null;
}
// not a predicate and not even an epsilon
if ( !(transition0.label.isSemanticPredicate()||
transition0.label.isEpsilon()) ) {
return null;
}
SemanticContext p = null;
SemanticContext p0 = null;
SemanticContext p1 = null;
if ( transition0.label.isSemanticPredicate() ) {
//System.out.println("pred "+transition0.label);
p = transition0.label.getSemanticContext();
// ignore backtracking preds not on left edge for this decision
if ( ((SemanticContext.Predicate)p).predicateAST.getType() ==
ANTLRParser.BACKTRACK_SEMPRED &&
s == altStartState.transition[0].target )
{
p = null; // don't count
}
}
// get preds from beyond this state
p0 = _getPredicates((NFAState)transition0.target, altStartState);
// get preds from other transition
Transition transition1 = s.transition[1];
if ( transition1!=null ) {
p1 = _getPredicates((NFAState)transition1.target, altStartState);
}
// join this&following-right|following-down
return SemanticContext.and(p,SemanticContext.or(p0,p1));
}
}
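The FIRST computation performed by FIRST()/_FIRST() above can be sketched over a toy grammar representation instead of the NFA. Here "EPSILON" plays the role EOR_TOKEN_TYPE plays in the real code: when a referenced rule is nullable, scanning continues past it, which is exactly the a : b A ; case the javadoc discusses. The lookBusy cycle guard is omitted for brevity, so this sketch assumes a non-left-recursive grammar; all names are invented.

```java
import java.util.*;

// Toy FIRST-set computation sketch; invented representation, not the
// ANTLR NFA. Lowercase symbols are rules, uppercase are tokens.
public class FirstSets {
    /** rule name -> alternatives; each alternative is a symbol sequence */
    static final Map<String, List<List<String>>> rules = new HashMap<>();

    static boolean isRule(String sym) { return rules.containsKey(sym); }

    /** FIRST of a symbol sequence; "EPSILON" marks nullability, the
     *  role EOR_TOKEN_TYPE plays when _FIRST falls off a rule end. */
    static Set<String> first(List<String> seq) {
        Set<String> result = new HashSet<>();
        for (String sym : seq) {
            if (!isRule(sym)) { result.add(sym); return result; }
            Set<String> f = firstOfRule(sym);
            result.addAll(f);
            result.remove("EPSILON");
            if (!f.contains("EPSILON")) return result; // rule not nullable: stop
            // nullable rule: keep scanning what follows, as in "a : b A ;"
        }
        result.add("EPSILON"); // fell off the end of the sequence
        return result;
    }

    static Set<String> firstOfRule(String r) {
        Set<String> result = new HashSet<>();
        for (List<String> alt : rules.get(r)) result.addAll(first(alt));
        return result;
    }

    public static void main(String[] args) {
        // a : b A ;    b : B | /* epsilon */ ;
        rules.put("a", Arrays.asList(Arrays.asList("b", "A")));
        List<List<String>> bAlts = new ArrayList<>();
        bAlts.add(Arrays.asList("B"));
        bAlts.add(new ArrayList<String>()); // epsilon alternative
        rules.put("b", bAlts);
        System.out.println(firstOfRule("a")); // contains both B and A
    }
}
```

Because b is nullable, FIRST(a) includes A as well as B, mirroring the nullable-rule handling around EOR_TOKEN_TYPE in _FIRST().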

/*
[The "BSD licence"]
Copyright (c) 2005-2008 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.analysis;
import org.antlr.misc.IntervalSet;
import org.antlr.misc.MultiMap;
import org.antlr.tool.ANTLRParser;
import java.util.Iterator;
import java.util.List;
import java.util.Collections;
/** A special DFA that is exactly LL(1) or LL(1) with backtracking mode
* predicates to resolve edge set collisions.
*/
public class LL1DFA extends DFA {
/** From list of lookahead sets (one per alt in decision), create
* an LL(1) DFA. One edge per set.
*
* s0-{alt1}->:o=>1
* | \
* | -{alt2}->:o=>2
* |
* ...
*/
public LL1DFA(int decisionNumber, NFAState decisionStartState, LookaheadSet[] altLook) {
DFAState s0 = newState();
startState = s0;
nfa = decisionStartState.nfa;
nAlts = nfa.grammar.getNumberOfAltsForDecisionNFA(decisionStartState);
this.decisionNumber = decisionNumber;
this.decisionNFAStartState = decisionStartState;
initAltRelatedInfo();
unreachableAlts = null;
for (int alt=1; alt<altLook.length; alt++) {
DFAState acceptAltState = newState();
acceptAltState.acceptState = true;
setAcceptState(alt, acceptAltState);
acceptAltState.k = 1;
acceptAltState.cachedUniquelyPredicatedAlt = alt;
Label e = getLabelForSet(altLook[alt].tokenTypeSet);
s0.addTransition(acceptAltState, e);
}
}
/** From a set of edgeset->list-of-alts mappings, create a DFA
* that uses syn preds for all |list-of-alts|>1.
*/
public LL1DFA(int decisionNumber,
NFAState decisionStartState,
MultiMap<IntervalSet, Integer> edgeMap)
{
DFAState s0 = newState();
startState = s0;
nfa = decisionStartState.nfa;
nAlts = nfa.grammar.getNumberOfAltsForDecisionNFA(decisionStartState);
this.decisionNumber = decisionNumber;
this.decisionNFAStartState = decisionStartState;
initAltRelatedInfo();
unreachableAlts = null;
for (Iterator it = edgeMap.keySet().iterator(); it.hasNext();) {
IntervalSet edge = (IntervalSet)it.next();
List<Integer> alts = edgeMap.get(edge);
Collections.sort(alts); // make sure alts are attempted in order
//System.out.println(edge+" -> "+alts);
DFAState s = newState();
s.k = 1;
Label e = getLabelForSet(edge);
s0.addTransition(s, e);
if ( alts.size()==1 ) {
s.acceptState = true;
int alt = alts.get(0);
setAcceptState(alt, s);
s.cachedUniquelyPredicatedAlt = alt;
}
else {
// resolve with syntactic predicates. Add edges from
// state s that test predicates.
s.resolvedWithPredicates = true;
for (int i = 0; i < alts.size(); i++) {
int alt = (int)alts.get(i);
s.cachedUniquelyPredicatedAlt = NFA.INVALID_ALT_NUMBER;
DFAState predDFATarget = getAcceptState(alt);
if ( predDFATarget==null ) {
predDFATarget = newState(); // create if not there.
predDFATarget.acceptState = true;
predDFATarget.cachedUniquelyPredicatedAlt = alt;
setAcceptState(alt, predDFATarget);
}
// add a transition to pred target from d
/*
int walkAlt =
decisionStartState.translateDisplayAltToWalkAlt(alt);
NFAState altLeftEdge = nfa.grammar.getNFAStateForAltOfDecision(decisionStartState, walkAlt);
NFAState altStartState = (NFAState)altLeftEdge.transition[0].target;
SemanticContext ctx = nfa.grammar.ll1Analyzer.getPredicates(altStartState);
System.out.println("sem ctx = "+ctx);
if ( ctx == null ) {
ctx = new SemanticContext.TruePredicate();
}
s.addTransition(predDFATarget, new Label(ctx));
*/
SemanticContext.Predicate synpred =
getSynPredForAlt(decisionStartState, alt);
if ( synpred == null ) {
synpred = new SemanticContext.TruePredicate();
}
s.addTransition(predDFATarget, new PredicateLabel(synpred));
}
}
}
//System.out.println("dfa for preds=\n"+this);
}
protected Label getLabelForSet(IntervalSet edgeSet) {
Label e = null;
int atom = edgeSet.getSingleElement();
if ( atom != Label.INVALID ) {
e = new Label(atom);
}
else {
e = new Label(edgeSet);
}
return e;
}
protected SemanticContext.Predicate getSynPredForAlt(NFAState decisionStartState,
int alt)
{
int walkAlt =
decisionStartState.translateDisplayAltToWalkAlt(alt);
NFAState altLeftEdge =
nfa.grammar.getNFAStateForAltOfDecision(decisionStartState, walkAlt);
NFAState altStartState = (NFAState)altLeftEdge.transition[0].target;
//System.out.println("alt "+alt+" start state = "+altStartState.stateNumber);
if ( altStartState.transition[0].isSemanticPredicate() ) {
SemanticContext ctx = altStartState.transition[0].label.getSemanticContext();
if ( ctx.isSyntacticPredicate() ) {
SemanticContext.Predicate p = (SemanticContext.Predicate)ctx;
if ( p.predicateAST.getType() == ANTLRParser.BACKTRACK_SEMPRED ) {
/*
System.out.println("syn pred for alt "+walkAlt+" "+
((SemanticContext.Predicate)altStartState.transition[0].label.getSemanticContext()).predicateAST);
*/
nfa.grammar.synPredUsedInDFA(this, ctx); // ctx is known syntactic here
return p;
}
}
}
return null;
}
}
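How the LL(1) DFA built by the first constructor above predicts an alternative can be sketched as a simple lookup: the start state has one edge per lookahead set, each leading directly to an accept state for that alt. Token names and alt numbers below are invented for illustration; this is not the ANTLR runtime API.

```java
import java.util.*;

// Sketch of LL(1) prediction over "one edge per lookahead set";
// alt numbers and token names are invented for illustration.
public class LL1Predict {
    /** alt number -> lookahead token set for that alt's s0 edge */
    static final Map<Integer, Set<String>> altLook = new LinkedHashMap<>();

    /** Check the s0 edges in order; the first set containing the
     *  lookahead token wins, like taking the edge to that alt's
     *  accept state. */
    static int predict(String lookahead) {
        for (Map.Entry<Integer, Set<String>> e : altLook.entrySet()) {
            if (e.getValue().contains(lookahead)) return e.getKey();
        }
        return -1; // no viable alternative
    }

    public static void main(String[] args) {
        altLook.put(1, new HashSet<>(Arrays.asList("IF", "WHILE")));
        altLook.put(2, new HashSet<>(Arrays.asList("ID")));
        System.out.println(predict("WHILE")); // 1
        System.out.println(predict("NUM"));   // -1
    }
}
```

The second LL1DFA constructor handles the case this sketch cannot: when one edge set maps to more than one alt, it resolves the collision with syntactic-predicate edges instead of a plain lookup.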

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.analysis;
import org.antlr.tool.Grammar;
import org.antlr.tool.GrammarAST;
import org.antlr.misc.IntervalSet;
import org.antlr.misc.IntSet;
/** A state machine transition label. A label can be a single token or
 *  character, a set of chars or tokens, an epsilon transition, a
 *  semantic predicate (which implies an epsilon transition), or a tree
 *  of predicates (in a DFA).
 */
public class Label implements Comparable, Cloneable {
public static final int INVALID = -7;
public static final int ACTION = -6;
public static final int EPSILON = -5;
public static final String EPSILON_STR = "<EPSILON>";
/** label is a semantic predicate; implies label is epsilon also */
public static final int SEMPRED = -4;
/** label is a set of tokens or char */
public static final int SET = -3;
/** End of Token is like EOF for lexer rules. It implies that no more
* characters are available and that NFA conversion should terminate
* for this path. For example
*
* A : 'a' 'b' | 'a' ;
*
* yields a DFA predictor:
*
* o-a->o-b->1 predict alt 1
* |
* |-EOT->o predict alt 2
*
* To generate code for EOT, treat it as the "default" path, which
* implies there is no way to mismatch a char for the state from
* which the EOT emanates.
*/
public static final int EOT = -2;
public static final int EOF = -1;
/** We have labels like EPSILON that are below 0; it's hard to
* store them in an array with negative index so use this
* constant as an index shift when accessing arrays based upon
* token type. If real token type is i, then array index would be
* NUM_FAUX_LABELS + i.
*/
public static final int NUM_FAUX_LABELS = -INVALID;
/** Anything at this value or larger can be considered a simple atom int
* for easy comparison during analysis only; faux labels are not used
* during parse time for real token types or char values.
*/
public static final int MIN_ATOM_VALUE = EOT;
// TODO: is 0 a valid unicode char? max is FFFF -1, right?
public static final int MIN_CHAR_VALUE = '\u0000';
public static final int MAX_CHAR_VALUE = '\uFFFE';
/** End of rule token type; imaginary token type used only for
* local, partial FOLLOW sets to indicate that the local FOLLOW
* hit the end of rule. During error recovery, the local FOLLOW
* of a token reference may go beyond the end of the rule and have
* to use FOLLOW(rule). I have to just shift the token types to 2..n
* rather than 1..n to accommodate this imaginary token in my bitsets.
* If I didn't use a bitset implementation for runtime sets, I wouldn't
* need this. EOF is another candidate for a run time token type for
* parsers. Follow sets are not computed for lexers so we do not have
* this issue.
*/
public static final int EOR_TOKEN_TYPE =
org.antlr.runtime.Token.EOR_TOKEN_TYPE;
public static final int DOWN = org.antlr.runtime.Token.DOWN;
public static final int UP = org.antlr.runtime.Token.UP;
/** tokens and char range overlap; tokens are MIN_TOKEN_TYPE..n */
public static final int MIN_TOKEN_TYPE =
org.antlr.runtime.Token.MIN_TOKEN_TYPE;
/** The wildcard '.' char atom implies all valid characters==UNICODE */
//public static final IntSet ALLCHAR = IntervalSet.of(MIN_CHAR_VALUE,MAX_CHAR_VALUE);
/** The token type or character value; or, signifies special label. */
protected int label;
/** A set of token types or character codes if label==SET */
// TODO: try IntervalSet for everything
protected IntSet labelSet;
public Label(int label) {
this.label = label;
}
/** Make a set label */
public Label(IntSet labelSet) {
if ( labelSet==null ) {
this.label = SET;
this.labelSet = IntervalSet.of(INVALID);
return;
}
int singleAtom = labelSet.getSingleElement();
if ( singleAtom!=INVALID ) {
// convert back to a single atomic element if |labelSet|==1
label = singleAtom;
return;
}
this.label = SET;
this.labelSet = labelSet;
}
public Object clone() {
Label l;
try {
l = (Label)super.clone();
l.label = this.label;
l.labelSet = new IntervalSet();
l.labelSet.addAll(this.labelSet);
}
catch (CloneNotSupportedException e) {
throw new InternalError();
}
return l;
}
public void add(Label a) {
if ( isAtom() ) {
labelSet = IntervalSet.of(label);
label=SET;
if ( a.isAtom() ) {
labelSet.add(a.getAtom());
}
else if ( a.isSet() ) {
labelSet.addAll(a.getSet());
}
else {
throw new IllegalStateException("can't add element to Label of type "+label);
}
return;
}
if ( isSet() ) {
if ( a.isAtom() ) {
labelSet.add(a.getAtom());
}
else if ( a.isSet() ) {
labelSet.addAll(a.getSet());
}
else {
throw new IllegalStateException("can't add element to Label of type "+label);
}
return;
}
throw new IllegalStateException("can't add element to Label of type "+label);
}
public boolean isAtom() {
return label>=MIN_ATOM_VALUE;
}
public boolean isEpsilon() {
return label==EPSILON;
}
public boolean isSemanticPredicate() {
return false;
}
public boolean isAction() {
return false;
}
public boolean isSet() {
return label==SET;
}
/** return the single atom label or INVALID if not a single atom */
public int getAtom() {
if ( isAtom() ) {
return label;
}
return INVALID;
}
public IntSet getSet() {
if ( label!=SET ) {
// convert single element to a set if they ask for it.
return IntervalSet.of(label);
}
return labelSet;
}
public void setSet(IntSet set) {
label=SET;
labelSet = set;
}
public SemanticContext getSemanticContext() {
return null;
}
public boolean matches(int atom) {
if ( label==atom ) {
return true; // handle the single atom case efficiently
}
if ( isSet() ) {
return labelSet.member(atom);
}
return false;
}
public boolean matches(IntSet set) {
if ( isAtom() ) {
return set.member(getAtom());
}
if ( isSet() ) {
// matches if intersection non-nil
return !getSet().and(set).isNil();
}
return false;
}
public boolean matches(Label other) {
if ( other.isSet() ) {
return matches(other.getSet());
}
if ( other.isAtom() ) {
return matches(other.getAtom());
}
return false;
}
public int hashCode() {
if (label==SET) {
return labelSet.hashCode();
}
else {
return label;
}
}
// TODO: do we care about comparing set {A} with atom A? It doesn't now.
public boolean equals(Object o) {
if ( o==null ) {
return false;
}
if ( this == o ) {
return true; // equals if same object
}
// labels must be the same even if epsilon or set or sempred etc...
if ( label!=((Label)o).label ) {
return false;
}
if ( label==SET ) {
return this.labelSet.equals(((Label)o).labelSet);
}
return true; // label values are same, so true
}
public int compareTo(Object o) {
return this.label-((Label)o).label;
}
/** Predicates are lists of AST nodes from the NFA created from the
* grammar, but the same predicate could be cut/paste into multiple
* places in the grammar. I must compare the text of all the
* predicates to truly answer whether {p1,p2} .equals {p1,p2}.
* Unfortunately, I cannot rely on the AST.equals() to work properly
* so I must do a brute force O(n^2) nested traversal of the Set
* doing a String compare.
*
* At this point, Labels are not compared for equals when they are
* predicates, but here's the code for future use.
*/
/*
protected boolean predicatesEquals(Set others) {
Iterator iter = semanticContext.iterator();
while (iter.hasNext()) {
AST predAST = (AST) iter.next();
Iterator inner = semanticContext.iterator();
while (inner.hasNext()) {
AST otherPredAST = (AST) inner.next();
if ( !predAST.getText().equals(otherPredAST.getText()) ) {
return false;
}
}
}
return true;
}
*/
public String toString() {
switch (label) {
case SET :
return labelSet.toString();
default :
return String.valueOf(label);
}
}
public String toString(Grammar g) {
switch (label) {
case SET :
return labelSet.toString(g);
default :
return g.getTokenDisplayName(label);
}
}
/*
public String predicatesToString() {
if ( semanticContext==NFAConfiguration.DEFAULT_CLAUSE_SEMANTIC_CONTEXT ) {
return "!other preds";
}
StringBuffer buf = new StringBuffer();
Iterator iter = semanticContext.iterator();
while (iter.hasNext()) {
AST predAST = (AST) iter.next();
buf.append(predAST.getText());
if ( iter.hasNext() ) {
buf.append("&");
}
}
return buf.toString();
}
*/
public static boolean intersect(Label label, Label edgeLabel) {
boolean hasIntersection = false;
boolean labelIsSet = label.isSet();
boolean edgeIsSet = edgeLabel.isSet();
if ( !labelIsSet && !edgeIsSet && edgeLabel.label==label.label ) {
hasIntersection = true;
}
else if ( labelIsSet && edgeIsSet &&
!edgeLabel.getSet().and(label.getSet()).isNil() ) {
hasIntersection = true;
}
else if ( labelIsSet && !edgeIsSet &&
label.getSet().member(edgeLabel.label) ) {
hasIntersection = true;
}
else if ( !labelIsSet && edgeIsSet &&
edgeLabel.getSet().member(label.label) ) {
hasIntersection = true;
}
return hasIntersection;
}
}
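The atom-to-set promotion in Label.add() above can be sketched with plain java.util collections. This is a minimal, illustrative model only: MiniLabel and its fields are stand-ins mimicking the promotion logic, not part of the ANTLR API, and a TreeSet replaces IntervalSet.

```java
import java.util.TreeSet;

// Hypothetical mini-model of Label's atom/set duality: an atom label
// promotes itself to a set the first time another label is merged in,
// mirroring Label.add() above.
public class MiniLabel {
    static final int SET = -3;          // mirrors Label.SET
    int label;                          // token type, or SET
    TreeSet<Integer> labelSet;          // only non-null when label==SET

    MiniLabel(int atom) { this.label = atom; }

    void add(MiniLabel a) {
        if (labelSet == null) {         // we are still an atom: promote
            labelSet = new TreeSet<>();
            labelSet.add(label);
            label = SET;
        }
        if (a.labelSet != null) labelSet.addAll(a.labelSet);
        else labelSet.add(a.label);
    }

    boolean matches(int atom) {
        if (label == atom) return true; // handle the single atom case
        return labelSet != null && labelSet.contains(atom);
    }

    public static void main(String[] args) {
        MiniLabel l = new MiniLabel(5);
        l.add(new MiniLabel(7));        // atom 5 becomes the set {5,7}
        System.out.println(l.matches(5) && l.matches(7) && !l.matches(6));
    }
}
```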

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.analysis;
import org.antlr.misc.IntervalSet;
import org.antlr.misc.IntSet;
import org.antlr.tool.Grammar;
/** An LL(1) lookahead set; contains a set of token types and a "hasEOF"
* condition when the set contains EOF. Since EOF is -1 everywhere and -1
* cannot be stored in my BitSet, I set a condition here. There may be other
* reasons in the future to abstract a LookaheadSet over a raw BitSet.
*/
public class LookaheadSet {
public IntervalSet tokenTypeSet;
public LookaheadSet() {
tokenTypeSet = new IntervalSet();
}
public LookaheadSet(IntSet s) {
this();
tokenTypeSet.addAll(s);
}
public LookaheadSet(int atom) {
tokenTypeSet = IntervalSet.of(atom);
}
public void orInPlace(LookaheadSet other) {
this.tokenTypeSet.addAll(other.tokenTypeSet);
}
public LookaheadSet or(LookaheadSet other) {
return new LookaheadSet(tokenTypeSet.or(other.tokenTypeSet));
}
public LookaheadSet subtract(LookaheadSet other) {
return new LookaheadSet(this.tokenTypeSet.subtract(other.tokenTypeSet));
}
public boolean member(int a) {
return tokenTypeSet.member(a);
}
public LookaheadSet intersection(LookaheadSet s) {
IntSet i = this.tokenTypeSet.and(s.tokenTypeSet);
LookaheadSet intersection = new LookaheadSet(i);
return intersection;
}
public boolean isNil() {
return tokenTypeSet.isNil();
}
public void remove(int a) {
tokenTypeSet = (IntervalSet)tokenTypeSet.subtract(IntervalSet.of(a));
}
public int hashCode() {
return tokenTypeSet.hashCode();
}
public boolean equals(Object other) {
return tokenTypeSet.equals(((LookaheadSet)other).tokenTypeSet);
}
public String toString(Grammar g) {
if ( tokenTypeSet==null ) {
return "";
}
String r = tokenTypeSet.toString(g);
return r;
}
public String toString() {
return toString(null);
}
}
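The or/subtract/member operations of LookaheadSet above can be approximated with a java.util.TreeSet standing in for ANTLR's IntervalSet. This is a sketch under that substitution; the Look class name is illustrative, not part of the runtime.

```java
import java.util.TreeSet;

// Rough model of LookaheadSet's set algebra using a TreeSet of token
// types in place of IntervalSet.
public class Look {
    final TreeSet<Integer> tokenTypeSet = new TreeSet<>();

    Look(int... atoms) { for (int a : atoms) tokenTypeSet.add(a); }

    Look or(Look other) {               // union, like LookaheadSet.or()
        Look r = new Look();
        r.tokenTypeSet.addAll(this.tokenTypeSet);
        r.tokenTypeSet.addAll(other.tokenTypeSet);
        return r;
    }

    Look subtract(Look other) {         // set difference
        Look r = new Look();
        r.tokenTypeSet.addAll(this.tokenTypeSet);
        r.tokenTypeSet.removeAll(other.tokenTypeSet);
        return r;
    }

    boolean member(int a) { return tokenTypeSet.contains(a); }

    public static void main(String[] args) {
        Look a = new Look(4, 5), b = new Look(5, 6);
        System.out.println(a.or(b).member(6));        // union gains 6
        System.out.println(a.subtract(b).member(5));  // difference drops 5
    }
}
```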

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.analysis;
import org.antlr.tool.Grammar;
import org.antlr.tool.NFAFactory;
/** An NFA (collection of NFAStates) constructed from a grammar. This
 * NFA is one big machine for the entire grammar. Decision points are
 * recorded by the Grammar object so we can, for example, convert to DFA
 * or simulate the NFA (interpret a decision).
 */
*/
public class NFA {
public static final int INVALID_ALT_NUMBER = -1;
/** This NFA represents which grammar? */
public Grammar grammar;
/** Which factory created this NFA? */
protected NFAFactory factory = null;
public boolean complete;
public NFA(Grammar g) {
this.grammar = g;
}
public int getNewNFAStateNumber() {
return grammar.composite.getNewNFAStateNumber();
}
public void addState(NFAState state) {
grammar.composite.addState(state);
}
public NFAState getState(int s) {
return grammar.composite.getState(s);
}
public NFAFactory getFactory() {
return factory;
}
public void setFactory(NFAFactory factory) {
this.factory = factory;
}
}

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.analysis;
import org.antlr.misc.Utils;
/** An NFA state, predicted alt, and syntactic/semantic context.
* The syntactic context is a pointer into the rule invocation
* chain used to arrive at the state. The semantic context is
 * the unordered set of semantic predicates encountered before reaching
* an NFA state.
*/
public class NFAConfiguration {
/** The NFA state associated with this configuration */
public int state;
/** What alt is predicted by this configuration */
public int alt;
/** What is the stack of rule invocations that got us to state? */
public NFAContext context;
/** The set of semantic predicates associated with this NFA
* configuration. The predicates were found on the way to
* the associated NFA state in this syntactic context.
* Set<AST>: track nodes in grammar containing the predicate
* for error messages and such (nice to know where the predicate
* came from in case of duplicates etc...). By using a set,
* the equals() method will correctly show {pred1,pred2} as equals()
* to {pred2,pred1}.
*/
public SemanticContext semanticContext = SemanticContext.EMPTY_SEMANTIC_CONTEXT;
/** Indicate that this configuration has been resolved and no further
* DFA processing should occur with it. Essentially, this is used
* as an "ignore" bit so that upon a set of nondeterministic configurations
* such as (s|2) and (s|3), I can set (s|3) to resolved=true (and any
* other configuration associated with alt 3).
*/
protected boolean resolved;
/** This bit is used to indicate a semantic predicate will be
* used to resolve the conflict. Method
* DFA.findNewDFAStatesAndAddDFATransitions will add edges for
* the predicates after it performs the reach operation. The
* nondeterminism resolver sets this when it finds a set of
* nondeterministic configurations (as it does for "resolved" field)
 * that have enough predicates to resolve the conflict.
*/
protected boolean resolveWithPredicate;
/** Lots of NFA states have only epsilon edges (1 or 2). We can
* safely consider only n>0 during closure.
*/
protected int numberEpsilonTransitionsEmanatingFromState;
/** Indicates that the NFA state associated with this configuration
* has exactly one transition and it's an atom (not epsilon etc...).
*/
protected boolean singleAtomTransitionEmanating;
//protected boolean addedDuringClosure = true;
public NFAConfiguration(int state,
int alt,
NFAContext context,
SemanticContext semanticContext)
{
this.state = state;
this.alt = alt;
this.context = context;
this.semanticContext = semanticContext;
}
/** An NFA configuration is equal to another if both have
 * the same state, they predict the same alternative, and the
* syntactic/semantic contexts are the same. I don't think
* the state|alt|ctx could be the same and have two different
* semantic contexts, but might as well define equals to be
* everything.
*/
public boolean equals(Object o) {
if ( o==null ) {
return false;
}
NFAConfiguration other = (NFAConfiguration)o;
return this.state==other.state &&
this.alt==other.alt &&
this.context.equals(other.context)&&
this.semanticContext.equals(other.semanticContext);
}
public int hashCode() {
int h = state + alt + context.hashCode();
return h;
}
public String toString() {
return toString(true);
}
public String toString(boolean showAlt) {
StringBuffer buf = new StringBuffer();
buf.append(state);
if ( showAlt ) {
buf.append("|");
buf.append(alt);
}
if ( context.parent!=null ) {
buf.append("|");
buf.append(context);
}
if ( semanticContext!=null &&
semanticContext!=SemanticContext.EMPTY_SEMANTIC_CONTEXT ) {
buf.append("|");
String escQuote = Utils.replace(semanticContext.toString(), "\"", "\\\"");
buf.append(escQuote);
}
if ( resolved ) {
buf.append("|resolved");
}
if ( resolveWithPredicate ) {
buf.append("|resolveWithPredicate");
}
return buf.toString();
}
}

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.analysis;
/** A tree node for tracking the call chains for NFAs that invoke
* other NFAs. These trees only have to point upwards to their parents
* so we can walk back up the tree (i.e., pop stuff off the stack). We
 * never walk down from the stack through the children.
*
* Each alt predicted in a decision has its own context tree,
* representing all possible return nodes. The initial stack has
* EOF ("$") in it. So, for m alternative productions, the lookahead
* DFA will have m NFAContext trees.
*
* To "push" a new context, just do "new NFAContext(context-parent, state)"
* which will add itself to the parent. The root is NFAContext(null, null).
*
* The complete context for an NFA configuration is the set of invoking states
* on the path from this node thru the parent pointers to the root.
*/
public class NFAContext {
/** This is similar to Bermudez's m constant in his LAR(m) where
* you bound the stack so your states don't explode. The main difference
* is that I bound only recursion on the stack, not the simple stack size.
* This looser constraint will let the conversion roam further to find
* lookahead to resolve a decision.
*
 * Bermudez's m operates differently as it is his LR stack depth;
 * I'm pretty sure it therefore includes all stack symbols. Here I
* restrict the size of an NFA configuration to be finite because a
* stack component may mention the same NFA invocation state at
* most m times. Hence, the number of DFA states will not grow forever.
* With recursive rules like
*
* e : '(' e ')' | INT ;
*
* you could chase your tail forever if somebody said "s : e '.' | e ';' ;"
* This constant prevents new states from being created after a stack gets
* "too big". Actually (12/14/2007) I realize that this example is
* trapped by the non-LL(*) detector for recursion in > 1 alt. Here is
* an example that trips stack overflow:
*
* s : a Y | A A A A A X ; // force recursion past m=4
* a : A a | Q;
*
* If that were:
*
* s : a Y | A+ X ;
*
* it could loop forever.
*
 * Imagine doing a depth-first search on the e DFA...as you chase an input
 * sequence you can recurse to the same rule, such as e above. You'd have a
 * chain of ((((. When you get to some point, you have to give up. The
* states in the chain will have longer and longer NFA config stacks.
* Must limit size.
*
* max=0 implies you cannot ever jump to another rule during closure.
* max=1 implies you can make as many calls as you want--you just
* can't ever visit a state that is on your rule invocation stack.
* I.e., you cannot ever recurse.
* max=2 implies you are able to recurse once (i.e., call a rule twice
* from the same place).
*
* This tracks recursion to a rule specific to an invocation site!
* It does not detect multiple calls to a rule from different rule
* invocation states. We are guaranteed to terminate because the
* stack can only grow as big as the number of NFA states * max.
*
* I noticed that the Java grammar didn't work with max=1, but did with
* max=4. Let's set to 4. Recursion is sometimes needed to resolve some
* fixed lookahead decisions.
*/
public static int MAX_SAME_RULE_INVOCATIONS_PER_NFA_CONFIG_STACK = 4;
public NFAContext parent;
/** The NFA state that invoked another rule's start state is recorded
* on the rule invocation context stack.
*/
public NFAState invokingState;
/** Computing the hashCode is very expensive and closureBusy()
* uses it to track when it's seen a state|ctx before to avoid
* infinite loops. As we add new contexts, record the hash code
* as this.invokingState + parent.cachedHashCode. Avoids walking
* up the tree for every hashCode(). Note that this caching works
* because a context is a monotonically growing tree of context nodes
* and nothing on the stack is ever modified...ctx just grows
* or shrinks.
*/
protected int cachedHashCode;
public NFAContext(NFAContext parent, NFAState invokingState) {
this.parent = parent;
this.invokingState = invokingState;
if ( invokingState!=null ) {
this.cachedHashCode = invokingState.stateNumber;
}
if ( parent!=null ) {
this.cachedHashCode += parent.cachedHashCode;
}
}
/** Two contexts are equals() if both have
* same call stack; walk upwards to the root.
 * Recall that the root sentinel node has no invokingState and no parent.
* Note that you may be comparing contexts in different alt trees.
*
* The hashCode is now cheap as it's computed once upon each context
* push on the stack. Use it to make equals() more efficient.
*/
public boolean equals(Object o) {
NFAContext other = ((NFAContext)o);
if ( this.cachedHashCode != other.cachedHashCode ) {
return false; // can't be same if hash is different
}
if ( this==other ) {
return true;
}
// System.out.println("comparing "+this+" with "+other);
NFAContext sp = this;
while ( sp.parent!=null && other.parent!=null ) {
if ( sp.invokingState != other.invokingState ) {
return false;
}
sp = sp.parent;
other = other.parent;
}
if ( !(sp.parent==null && other.parent==null) ) {
return false; // both pointers must be at their roots after walk
}
return true;
}
/** Two contexts conflict() if they are equals() or one is a stack suffix
* of the other. For example, contexts [21 12 $] and [21 9 $] do not
* conflict, but [21 $] and [21 12 $] do conflict. Note that I should
* probably not show the $ in this case. There is a dummy node for each
* stack that just means empty; $ is a marker that's all.
*
* This is used in relation to checking conflicts associated with a
* single NFA state's configurations within a single DFA state.
* If there are configurations s and t within a DFA state such that
* s.state=t.state && s.alt != t.alt && s.ctx conflicts t.ctx then
* the DFA state predicts more than a single alt--it's nondeterministic.
* Two contexts conflict if they are the same or if one is a suffix
* of the other.
*
* When comparing contexts, if one context has a stack and the other
* does not then they should be considered the same context. The only
* way for an NFA state p to have an empty context and a nonempty context
* is the case when closure falls off end of rule without a call stack
* and re-enters the rule with a context. This resolves the issue I
* discussed with Sriram Srinivasan Feb 28, 2005 about not terminating
* fast enough upon nondeterminism.
*/
public boolean conflictsWith(NFAContext other) {
return this.suffix(other); // || this.equals(other);
}
/** [$] suffix any context
* [21 $] suffix [21 12 $]
* [21 12 $] suffix [21 $]
* [21 18 $] suffix [21 18 12 9 $]
* [21 18 12 9 $] suffix [21 18 $]
* [21 12 $] not suffix [21 9 $]
*
* Example "[21 $] suffix [21 12 $]" means: rule r invoked current rule
* from state 21. Rule s invoked rule r from state 12 which then invoked
* current rule also via state 21. While the context prior to state 21
* is different, the fact that both contexts emanate from state 21 implies
* that they are now going to track perfectly together. Once they
* converged on state 21, there is no way they can separate. In other
* words, the prior stack state is not consulted when computing where to
 * go in the closure operation. α$ and βα$ are considered the same stack.
 * If α is popped off then $ and β$ remain; they are now an empty and
* nonempty context comparison. So, if one stack is a suffix of
* another, then it will still degenerate to the simple empty stack
* comparison case.
*/
protected boolean suffix(NFAContext other) {
NFAContext sp = this;
// if one of the contexts is empty, it never enters loop and returns true
while ( sp.parent!=null && other.parent!=null ) {
if ( sp.invokingState != other.invokingState ) {
return false;
}
sp = sp.parent;
other = other.parent;
}
//System.out.println("suffix");
return true;
}
/** Walk upwards to the root of the call stack context looking
* for a particular invoking state.
public boolean contains(int state) {
NFAContext sp = this;
int n = 0; // track recursive invocations of state
System.out.println("this.context is "+sp);
while ( sp.parent!=null ) {
if ( sp.invokingState.stateNumber == state ) {
return true;
}
sp = sp.parent;
}
return false;
}
*/
/** Given an NFA state number, how many times has the NFA-to-DFA
* conversion pushed that state on the stack? In other words,
* the NFA state must be a rule invocation state and this method
* tells you how many times you've been to this state. If none,
* then you have not called the target rule from this state before
* (though another NFA state could have called that target rule).
* If n=1, then you've been to this state before during this
* DFA construction and are going to invoke that rule again.
*
* Note that many NFA states can invoke rule r, but we ignore recursion
* unless you hit the same rule invocation state again.
*/
public int recursionDepthEmanatingFromState(int state) {
NFAContext sp = this;
int n = 0; // track recursive invocations of target from this state
//System.out.println("this.context is "+sp);
while ( sp.parent!=null ) {
if ( sp.invokingState.stateNumber == state ) {
n++;
}
sp = sp.parent;
}
return n;
}
public int hashCode() {
return cachedHashCode;
/*
int h = 0;
NFAContext sp = this;
while ( sp.parent!=null ) {
h += sp.invokingState.getStateNumber();
sp = sp.parent;
}
return h;
*/
}
/** A context is empty if there is no parent; meaning nobody pushed
* anything on the call stack.
*/
public boolean isEmpty() {
return parent==null;
}
public String toString() {
StringBuffer buf = new StringBuffer();
NFAContext sp = this;
buf.append("[");
while ( sp.parent!=null ) {
buf.append(sp.invokingState.stateNumber);
buf.append(" ");
sp = sp.parent;
}
buf.append("$]");
return buf.toString();
}
}
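The suffix() walk above, which drives conflictsWith(), can be sketched with a bare parent-linked node. Ctx and its int invoking states are illustrative stand-ins for NFAContext and NFAState; the stacks below reproduce the [21 $] / [21 12 $] / [21 9 $] examples from the comment.

```java
// Minimal model of NFAContext.suffix(): two invocation stacks conflict
// when one is a (possibly equal) suffix of the other, comparing from
// the top of the stack down toward the root sentinel.
public class Ctx {
    final Ctx parent;
    final int invokingState;           // -1 marks the root sentinel

    Ctx(Ctx parent, int invokingState) {
        this.parent = parent;
        this.invokingState = invokingState;
    }

    boolean suffix(Ctx other) {
        Ctx sp = this;
        // if either stack empties first, the shorter one is a suffix
        while (sp.parent != null && other.parent != null) {
            if (sp.invokingState != other.invokingState) return false;
            sp = sp.parent;
            other = other.parent;
        }
        return true;
    }

    public static void main(String[] args) {
        Ctx root = new Ctx(null, -1);                 // [$]
        Ctx c21 = new Ctx(root, 21);                  // [21 $]
        Ctx c21_12 = new Ctx(new Ctx(root, 12), 21);  // [21 12 $]
        Ctx c21_9 = new Ctx(new Ctx(root, 9), 21);    // [21 9 $]
        System.out.println(c21.suffix(c21_12));       // shorter is a suffix
        System.out.println(c21_12.suffix(c21_9));     // 12 != 9 below top
    }
}
```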

/*
[The "BSD licence"]
Copyright (c) 2005-2008 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.analysis;
import org.antlr.misc.Barrier;
import org.antlr.tool.Grammar;
import org.antlr.tool.ErrorManager;
/** Convert all decisions i..j inclusive in a thread */
public class NFAConversionThread implements Runnable {
Grammar grammar;
int i, j;
Barrier barrier;
public NFAConversionThread(Grammar grammar,
Barrier barrier,
int i,
int j)
{
this.grammar = grammar;
this.barrier = barrier;
this.i = i;
this.j = j;
}
public void run() {
for (int decision=i; decision<=j; decision++) {
NFAState decisionStartState = grammar.getDecisionNFAStartState(decision);
if ( decisionStartState.getNumberOfTransitions()>1 ) {
grammar.createLookaheadDFA(decision,true);
}
}
// now wait for others to finish
try {
barrier.waitForRelease();
}
catch(InterruptedException e) {
ErrorManager.internalError("what the hell? DFA interruptus", e);
}
}
}
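The partition-and-wait pattern in NFAConversionThread (each thread converts decisions i..j, then blocks on a shared barrier) can be sketched with java.util.concurrent.CountDownLatch standing in for ANTLR's Barrier. The RangeWorker name, the results array, and the squaring stand in for the real per-decision DFA construction; this is a simplified analogue, not the actual synchronization used by Barrier.

```java
import java.util.concurrent.CountDownLatch;

// Each worker handles an inclusive range of "decisions", then signals
// the latch; main awaits until every range is done.
public class RangeWorker implements Runnable {
    static final int[] results = new int[8];
    final int i, j;
    final CountDownLatch done;

    RangeWorker(CountDownLatch done, int i, int j) {
        this.done = done; this.i = i; this.j = j;
    }

    public void run() {
        for (int d = i; d <= j; d++) {
            results[d] = d * d;   // stand-in for createLookaheadDFA(d)
        }
        done.countDown();         // announce this range is finished
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(2);
        new Thread(new RangeWorker(done, 0, 3)).start();
        new Thread(new RangeWorker(done, 4, 7)).start();
        done.await();             // all decisions converted
        System.out.println(results[7]);
    }
}
```

Note that CountDownLatch only lets the coordinator wait for the workers; ANTLR's Barrier also makes each worker wait for its peers, which a CyclicBarrier would model more closely.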

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.analysis;
import org.antlr.tool.GrammarAST;
import org.antlr.tool.Rule;
import org.antlr.tool.ErrorManager;
/** A state within an NFA. At most 2 transitions emanate from any NFA state. */
public class NFAState extends State {
// I need to distinguish between NFA decision states for (...)* and (...)+
// during NFA interpretation.
public static final int LOOPBACK = 1;
public static final int BLOCK_START = 2;
public static final int OPTIONAL_BLOCK_START = 3;
public static final int BYPASS = 4;
public static final int RIGHT_EDGE_OF_BLOCK = 5;
public static final int MAX_TRANSITIONS = 2;
/** How many transitions; 0, 1, or 2 transitions */
int numTransitions = 0;
public Transition[] transition = new Transition[MAX_TRANSITIONS];
/** For o-A->o type NFA transitions, record the label that leads to this
* state. Useful for creating rich error messages when we find
* states that are insufficiently covered (by predicates).
*/
public Label incidentEdgeLabel;
/** Which NFA are we in? */
public NFA nfa = null;
/** What's its decision number from 1..n? */
protected int decisionNumber = 0;
/** Subrules (...)* and (...)+ have more than one decision point in
* the NFA created for them. They both have a loop-exit-or-stay-in
* decision node (the loop back node). They both have a normal
* alternative block decision node at the left edge. The (...)* is
* worse as it even has a bypass decision (2 alts: stay in or bypass)
* node at the extreme left edge. This is not how they are generated
* in code, since a while-loop (or similar construct) handles either case. For
* error messages (where I need to print the nondeterministic alts)
* and for interpretation, I need to use the single DFA that is created
* (for efficiency) but interpret the results differently depending
* on which of the 2 or 3 decision states uses the DFA. For example,
* the DFA will always report alt n+1 as the exit branch for n real
* alts, so I need to translate that depending on the decision state.
*
* If decisionNumber>0 then this var tells you what kind of decision
* state it is.
*/
public int decisionStateType;
/** What rule do we live in? */
public Rule enclosingRule;
/** During debugging and for nondeterminism warnings, it's useful
* to know what relationship this node has to the original grammar.
* For example, "start of alt 1 of rule a".
*/
protected String description;
/** Associate this NFAState with the corresponding GrammarAST node
* from which this node was created. This is useful not only for
* associating the eventual lookahead DFA with the associated
* Grammar position, but also for providing users with
* nondeterminism warnings. Mainly used by decision states to
* report line:col info. Could also be used to track line:col
* for elements such as token refs.
*/
public GrammarAST associatedASTNode;
/** Is this state the sole target of an EOT transition? */
protected boolean EOTTargetState = false;
/** The GUI developed by Jean Bovet needs to know which state pairs
* correspond to the start/stop of a block.
*/
public int endOfBlockStateNumber = State.INVALID_STATE_NUMBER;
public NFAState(NFA nfa) {
this.nfa = nfa;
}
public int getNumberOfTransitions() {
return numTransitions;
}
public void addTransition(Transition e) {
if ( e==null ) {
throw new IllegalArgumentException("You can't add a null transition");
}
if ( numTransitions>=transition.length ) {
throw new IllegalArgumentException("You can only have "+transition.length+" transitions");
}
// e was already verified non-null above
transition[numTransitions] = e;
numTransitions++;
// Set the "back pointer" of the target state so that it
// knows about the label of the incoming edge.
Label label = e.label;
if ( label.isAtom() || label.isSet() ) {
if ( ((NFAState)e.target).incidentEdgeLabel!=null ) {
ErrorManager.internalError("Clobbered incident edge");
}
((NFAState)e.target).incidentEdgeLabel = e.label;
}
}
/** Used during optimization to reset a state to have the (single)
* transition another state has.
*/
public void setTransition0(Transition e) {
if ( e==null ) {
throw new IllegalArgumentException("You can't set transition 0 to null");
}
transition[0] = e;
transition[1] = null;
numTransitions = 1;
}
public Transition transition(int i) {
return transition[i];
}
/** The DFA decision for this NFA decision state always has
* an exit path for loops as n+1 for n alts in the loop.
* That is really useful for displaying nondeterministic alts
* and so on, but for walking the NFA to get a sequence of edge
* labels or for actually parsing, we need to get the real alt
* number. The real alt number for exiting a loop is always 1
* as transition 0 points at the exit branch (we always compute
* DFAs for loops at the loopback state).
*
* For walking/parsing the loopback state:
* 1 2 3 display alt (for human consumption)
* 2 3 1 walk alt
*
* For walking the block start:
* 1 2 3 display alt
* 1 2 3
*
* For walking the bypass state of a (...)* loop:
* 1 2 3 display alt
1 1 2 (all block alts map to entering the loop; the exit alt takes the bypass)
*
Non-loop EBNF blocks do not need to be translated; this method
returns their display alt unchanged since decisionStateType==0.
*
* Return same alt if we can't translate.
*/
public int translateDisplayAltToWalkAlt(int displayAlt) {
NFAState nfaStart = this;
if ( decisionNumber==0 || decisionStateType==0 ) {
return displayAlt;
}
int walkAlt = 0;
// find the NFA loopback state associated with this DFA
// and count number of alts (all alt numbers are computed
// based upon the loopback's NFA state).
/*
DFA dfa = nfa.grammar.getLookaheadDFA(decisionNumber);
if ( dfa==null ) {
ErrorManager.internalError("can't get DFA for decision "+decisionNumber);
}
*/
int nAlts = nfa.grammar.getNumberOfAltsForDecisionNFA(nfaStart);
switch ( nfaStart.decisionStateType ) {
case LOOPBACK :
walkAlt = displayAlt % nAlts + 1; // rotate right: exit alt (nAlts) maps to walk alt 1
break;
case BLOCK_START :
case OPTIONAL_BLOCK_START :
walkAlt = displayAlt; // identity transformation
break;
case BYPASS :
if ( displayAlt == nAlts ) {
walkAlt = 2; // bypass
}
else {
walkAlt = 1; // any non exit branch alt predicts entering
}
break;
}
return walkAlt;
}
// Setter/Getters
/** What AST node is associated with this NFAState? When you
* set the AST node, it is also made to point back to this NFA state.
*/
public void setDecisionASTNode(GrammarAST decisionASTNode) {
decisionASTNode.setNFAStartState(this);
this.associatedASTNode = decisionASTNode;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public int getDecisionNumber() {
return decisionNumber;
}
public void setDecisionNumber(int decisionNumber) {
this.decisionNumber = decisionNumber;
}
public boolean isEOTTargetState() {
return EOTTargetState;
}
public void setEOTTargetState(boolean eot) {
EOTTargetState = eot;
}
public boolean isDecisionState() {
return decisionStateType>0;
}
public String toString() {
return String.valueOf(stateNumber);
}
}
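The display-alt to walk-alt mapping documented in translateDisplayAltToWalkAlt can be illustrated with a small standalone sketch. The class and method names below are illustrative only (not ANTLR API); the arithmetic mirrors the switch in NFAState:

```java
// Standalone sketch of translateDisplayAltToWalkAlt's mapping rules.
// WalkAltDemo and translate() are hypothetical names, not part of ANTLR.
public class WalkAltDemo {
    static final int LOOPBACK = 1, BLOCK_START = 2,
                     OPTIONAL_BLOCK_START = 3, BYPASS = 4;

    // displayAlt is 1..nAlts; returns the alt used when walking the NFA
    static int translate(int displayAlt, int nAlts, int decisionStateType) {
        switch (decisionStateType) {
            case LOOPBACK:
                // rotate right: exit branch (display alt nAlts) becomes walk alt 1
                return displayAlt % nAlts + 1;
            case BLOCK_START:
            case OPTIONAL_BLOCK_START:
                return displayAlt;                     // identity transformation
            case BYPASS:
                // any real alt enters the loop (1); the exit alt takes the bypass (2)
                return displayAlt == nAlts ? 2 : 1;
            default:
                return displayAlt;                     // not a translatable decision
        }
    }

    public static void main(String[] args) {
        // Reproduces the tables in the NFAState comment for nAlts == 3
        System.out.println(translate(1, 3, LOOPBACK)); // 2
        System.out.println(translate(3, 3, LOOPBACK)); // 1
        System.out.println(translate(2, 3, BYPASS));   // 1
        System.out.println(translate(3, 3, BYPASS));   // 2
    }
}
```

Running main reproduces the "display alt" vs "walk alt" tables from the class comment for a three-alt decision.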

File diff suppressed because it is too large

@@ -0,0 +1,38 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2008 Terence Parr
All rights reserved.
*/
package org.antlr.analysis;
/** Used to abort DFA construction when we find a non-LL(*) decision; i.e.,
* a decision that has recursion in more than a single alt.
*/
public class NonLLStarDecisionException extends RuntimeException {
public DFA abortedDFA;
public NonLLStarDecisionException(DFA abortedDFA) {
this.abortedDFA = abortedDFA;
}
}

@@ -0,0 +1,85 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2008 Terence Parr
All rights reserved.
*/
package org.antlr.analysis;
import org.antlr.tool.GrammarAST;
import org.antlr.tool.Grammar;
public class PredicateLabel extends Label {
/** A tree of semantic predicates from the grammar AST if label==SEMPRED.
* In the NFA, labels will always be exactly one predicate, but the DFA
* may have to combine a bunch of them as it collects predicates from
* multiple NFA configurations into a single DFA state.
*/
protected SemanticContext semanticContext;
/** Make a semantic predicate label */
public PredicateLabel(GrammarAST predicateASTNode) {
super(SEMPRED);
this.semanticContext = new SemanticContext.Predicate(predicateASTNode);
}
/** Make a semantic predicate label from an existing semantic context */
public PredicateLabel(SemanticContext semCtx) {
super(SEMPRED);
this.semanticContext = semCtx;
}
public int hashCode() {
return semanticContext.hashCode();
}
public boolean equals(Object o) {
if ( o==null ) {
return false;
}
if ( this == o ) {
return true; // equals if same object
}
if ( !(o instanceof PredicateLabel) ) {
return false;
}
return semanticContext.equals(((PredicateLabel)o).semanticContext);
}
public boolean isSemanticPredicate() {
return true;
}
public SemanticContext getSemanticContext() {
return semanticContext;
}
public String toString() {
return "{"+semanticContext+"}?";
}
public String toString(Grammar g) {
return toString();
}
}

@@ -0,0 +1,55 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
*/
package org.antlr.analysis;
import org.antlr.tool.Grammar;
import org.antlr.tool.Rule;
/** A transition used to reference another rule. It tracks two targets
* really: the actual transition target and the state following the
* state that refers to the other rule. Conversion of an NFA that
* falls off the end of a rule will be able to figure out who invoked
* that rule because of these special transitions.
*/
public class RuleClosureTransition extends Transition {
/** Ptr to the rule definition object for this rule ref */
public Rule rule;
/** What node to begin computations following ref to rule */
public NFAState followState;
public RuleClosureTransition(Rule rule,
NFAState ruleStart,
NFAState followState)
{
super(Label.EPSILON, ruleStart);
this.rule = rule;
this.followState = followState;
}
}
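The follow-state bookkeeping described in the class comment works like a return address: when NFA traversal falls off the end of the invoked rule, it resumes at the state recorded on the rule-closure transition. A minimal sketch (hypothetical names, not ANTLR API):

```java
// Sketch of why RuleClosureTransition tracks a follow state: during NFA
// simulation it acts as a return address for the rule invocation.
// RuleCallDemo and invokeAndReturn() are illustrative names only.
import java.util.ArrayDeque;
import java.util.Deque;

public class RuleCallDemo {
    // Simulate: caller reaches a rule reference, descends into the rule,
    // and resumes at the follow state recorded on the transition.
    static String invokeAndReturn(String followState) {
        Deque<String> stack = new ArrayDeque<>(); // rule invocation stack
        stack.push(followState);                  // remember where to resume
        // ... walk the NFA states of the invoked rule here ...
        return stack.pop();                       // rule end: resume at follow state
    }

    public static void main(String[] args) {
        System.out.println(invokeAndReturn("stateAfterRuleRef"));
    }
}
```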

@@ -0,0 +1,486 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
*/
package org.antlr.analysis;
import org.antlr.stringtemplate.StringTemplate;
import org.antlr.stringtemplate.StringTemplateGroup;
import org.antlr.codegen.CodeGenerator;
import org.antlr.tool.ANTLRParser;
import org.antlr.tool.GrammarAST;
import org.antlr.tool.Grammar;
import java.util.Set;
import java.util.HashSet;
import java.util.Iterator;
/** A binary tree structure used to record the semantic context in which
* an NFA configuration is valid. It's either a single predicate or
* a tree representing an operation tree such as: p1&&p2 or p1||p2.
*
* For NFA o-p1->o-p2->o, create tree AND(p1,p2).
* For NFA (1)-p1->(2)
* | ^
* | |
* (3)-p2----
* we will have to combine p1 and p2 into DFA state as we will be
* adding NFA configurations for state 2 with two predicates p1,p2.
* So, set context for combined NFA config for state 2: OR(p1,p2).
*
* I have scoped the AND, NOT, OR, and Predicate subclasses of
* SemanticContext within the scope of this outer class.
*
* July 7, 2006: TJP altered OR to be a set of operands. The binary tree
* made it really hard to reduce complicated || sequences to their minimum.
* Got huge repeated || conditions.
*/
public abstract class SemanticContext {
/** Create a default value for the semantic context shared among all
* NFAConfigurations that do not have an actual semantic context.
* This prevents lots of if!=null type checks all over; it represents
* just an empty set of predicates.
*/
public static final SemanticContext EMPTY_SEMANTIC_CONTEXT = new Predicate();
/** Given a semantic context expression tree, return a tree with all
* nongated predicates set to true and then reduced. So p&&(q||r) would
* return p&&r if q is nongated but p and r are gated.
*/
public abstract SemanticContext getGatedPredicateContext();
/** Generate an expression that will evaluate the semantic context,
* given a set of output templates.
*/
public abstract StringTemplate genExpr(CodeGenerator generator,
StringTemplateGroup templates,
DFA dfa);
public abstract boolean isSyntacticPredicate();
/** Notify the indicated grammar of any syn preds used within this context */
public void trackUseOfSyntacticPredicates(Grammar g) {
}
public static class Predicate extends SemanticContext {
/** The AST node in tree created from the grammar holding the predicate */
public GrammarAST predicateAST;
/** Is this a {...}?=> gating predicate or a normal disambiguating {...}?
* If any predicate in expression is gated, then expression is considered
* gated.
*
* The simple Predicate object's predicate AST's type is used to set
* gated to true if type==GATED_SEMPRED.
*/
protected boolean gated = false;
/** syntactic predicates are converted to semantic predicates
* but synpreds are generated slightly differently.
*/
protected boolean synpred = false;
public static final int INVALID_PRED_VALUE = -1;
public static final int FALSE_PRED = 0;
public static final int TRUE_PRED = 1;
/** sometimes predicates are known to be true or false; we need
* a way to represent this without resorting to a target language
* value like true or TRUE.
*/
protected int constantValue = INVALID_PRED_VALUE;
public Predicate() {
predicateAST = new GrammarAST();
this.gated=false;
}
public Predicate(GrammarAST predicate) {
this.predicateAST = predicate;
this.gated =
predicate.getType()==ANTLRParser.GATED_SEMPRED ||
predicate.getType()==ANTLRParser.SYN_SEMPRED ;
this.synpred =
predicate.getType()==ANTLRParser.SYN_SEMPRED ||
predicate.getType()==ANTLRParser.BACKTRACK_SEMPRED;
}
public Predicate(Predicate p) {
this.predicateAST = p.predicateAST;
this.gated = p.gated;
this.synpred = p.synpred;
this.constantValue = p.constantValue;
}
/** Two predicates are the same if they are literally the same
* text rather than same node in the grammar's AST.
* Or, if they have the same constant value, return equal.
* As of July 2006 I'm not sure these are needed.
*/
public boolean equals(Object o) {
if ( !(o instanceof Predicate) ) {
return false;
}
return predicateAST.getText().equals(((Predicate)o).predicateAST.getText());
}
public int hashCode() {
if ( predicateAST ==null ) {
return 0;
}
return predicateAST.getText().hashCode();
}
public StringTemplate genExpr(CodeGenerator generator,
StringTemplateGroup templates,
DFA dfa)
{
StringTemplate eST = null;
if ( templates!=null ) {
if ( synpred ) {
eST = templates.getInstanceOf("evalSynPredicate");
}
else {
eST = templates.getInstanceOf("evalPredicate");
generator.grammar.decisionsWhoseDFAsUsesSemPreds.add(dfa);
}
String predEnclosingRuleName = predicateAST.enclosingRuleName;
/*
String decisionEnclosingRuleName =
dfa.getNFADecisionStartState().getEnclosingRule();
// if these rulenames are diff, then pred was hoisted out of rule
// Currently I don't warn you about this as it could be annoying.
// I do the translation anyway.
*/
//eST.setAttribute("pred", this.toString());
if ( generator!=null ) {
eST.setAttribute("pred",
generator.translateAction(predEnclosingRuleName,predicateAST));
}
}
else {
eST = new StringTemplate("$pred$");
eST.setAttribute("pred", this.toString());
return eST;
}
if ( generator!=null ) {
String description =
generator.target.getTargetStringLiteralFromString(this.toString());
eST.setAttribute("description", description);
}
return eST;
}
public SemanticContext getGatedPredicateContext() {
if ( gated ) {
return this;
}
return null;
}
public boolean isSyntacticPredicate() {
return predicateAST !=null &&
( predicateAST.getType()==ANTLRParser.SYN_SEMPRED ||
predicateAST.getType()==ANTLRParser.BACKTRACK_SEMPRED );
}
public void trackUseOfSyntacticPredicates(Grammar g) {
if ( synpred ) {
g.synPredNamesUsedInDFA.add(predicateAST.getText());
}
}
public String toString() {
if ( predicateAST ==null ) {
return "<nopred>";
}
return predicateAST.getText();
}
}
public static class TruePredicate extends Predicate {
public TruePredicate() {
super();
this.constantValue = TRUE_PRED;
}
public StringTemplate genExpr(CodeGenerator generator,
StringTemplateGroup templates,
DFA dfa)
{
if ( templates!=null ) {
return templates.getInstanceOf("true");
}
return new StringTemplate("true");
}
public String toString() {
return "true"; // not used for code gen, just DOT and print outs
}
}
/*
public static class FalsePredicate extends Predicate {
public FalsePredicate() {
super();
this.constantValue = FALSE_PRED;
}
public StringTemplate genExpr(CodeGenerator generator,
StringTemplateGroup templates,
DFA dfa)
{
if ( templates!=null ) {
return templates.getInstanceOf("false");
}
return new StringTemplate("false");
}
public String toString() {
return "false"; // not used for code gen, just DOT and print outs
}
}
*/
public static class AND extends SemanticContext {
protected SemanticContext left,right;
public AND(SemanticContext a, SemanticContext b) {
this.left = a;
this.right = b;
}
public StringTemplate genExpr(CodeGenerator generator,
StringTemplateGroup templates,
DFA dfa)
{
StringTemplate eST = null;
if ( templates!=null ) {
eST = templates.getInstanceOf("andPredicates");
}
else {
eST = new StringTemplate("($left$&&$right$)");
}
eST.setAttribute("left", left.genExpr(generator,templates,dfa));
eST.setAttribute("right", right.genExpr(generator,templates,dfa));
return eST;
}
public SemanticContext getGatedPredicateContext() {
SemanticContext gatedLeft = left.getGatedPredicateContext();
SemanticContext gatedRight = right.getGatedPredicateContext();
if ( gatedLeft==null ) {
return gatedRight;
}
if ( gatedRight==null ) {
return gatedLeft;
}
return new AND(gatedLeft, gatedRight);
}
public boolean isSyntacticPredicate() {
return left.isSyntacticPredicate()||right.isSyntacticPredicate();
}
public void trackUseOfSyntacticPredicates(Grammar g) {
left.trackUseOfSyntacticPredicates(g);
right.trackUseOfSyntacticPredicates(g);
}
public String toString() {
return "("+left+"&&"+right+")";
}
}
public static class OR extends SemanticContext {
protected Set operands;
public OR(SemanticContext a, SemanticContext b) {
operands = new HashSet();
if ( a instanceof OR ) {
operands.addAll(((OR)a).operands);
}
else if ( a!=null ) {
operands.add(a);
}
if ( b instanceof OR ) {
operands.addAll(((OR)b).operands);
}
else if ( b!=null ) {
operands.add(b);
}
}
public StringTemplate genExpr(CodeGenerator generator,
StringTemplateGroup templates,
DFA dfa)
{
StringTemplate eST = null;
if ( templates!=null ) {
eST = templates.getInstanceOf("orPredicates");
}
else {
eST = new StringTemplate("($first(operands)$$rest(operands):{o | ||$o$}$)");
}
for (Iterator it = operands.iterator(); it.hasNext();) {
SemanticContext semctx = (SemanticContext) it.next();
eST.setAttribute("operands", semctx.genExpr(generator,templates,dfa));
}
return eST;
}
public SemanticContext getGatedPredicateContext() {
SemanticContext result = null;
for (Iterator it = operands.iterator(); it.hasNext();) {
SemanticContext semctx = (SemanticContext) it.next();
SemanticContext gatedPred = semctx.getGatedPredicateContext();
if ( gatedPred!=null ) {
result = or(result, gatedPred);
// result = new OR(result, gatedPred);
}
}
return result;
}
public boolean isSyntacticPredicate() {
for (Iterator it = operands.iterator(); it.hasNext();) {
SemanticContext semctx = (SemanticContext) it.next();
if ( semctx.isSyntacticPredicate() ) {
return true;
}
}
return false;
}
public void trackUseOfSyntacticPredicates(Grammar g) {
for (Iterator it = operands.iterator(); it.hasNext();) {
SemanticContext semctx = (SemanticContext) it.next();
semctx.trackUseOfSyntacticPredicates(g);
}
}
public String toString() {
StringBuffer buf = new StringBuffer();
buf.append("(");
int i = 0;
for (Iterator it = operands.iterator(); it.hasNext();) {
SemanticContext semctx = (SemanticContext) it.next();
if ( i>0 ) {
buf.append("||");
}
buf.append(semctx.toString());
i++;
}
buf.append(")");
return buf.toString();
}
}
public static class NOT extends SemanticContext {
protected SemanticContext ctx;
public NOT(SemanticContext ctx) {
this.ctx = ctx;
}
public StringTemplate genExpr(CodeGenerator generator,
StringTemplateGroup templates,
DFA dfa)
{
StringTemplate eST = null;
if ( templates!=null ) {
eST = templates.getInstanceOf("notPredicate");
}
else {
eST = new StringTemplate("!($pred$)");
}
eST.setAttribute("pred", ctx.genExpr(generator,templates,dfa));
return eST;
}
public SemanticContext getGatedPredicateContext() {
SemanticContext p = ctx.getGatedPredicateContext();
if ( p==null ) {
return null;
}
return new NOT(p);
}
public boolean isSyntacticPredicate() {
return ctx.isSyntacticPredicate();
}
public void trackUseOfSyntacticPredicates(Grammar g) {
ctx.trackUseOfSyntacticPredicates(g);
}
public boolean equals(Object object) {
if ( !(object instanceof NOT) ) {
return false;
}
return this.ctx.equals(((NOT)object).ctx);
}
public int hashCode() {
// keep hashCode consistent with the overridden equals
return ~ctx.hashCode();
}
public String toString() {
return "!("+ctx+")";
}
}
public static SemanticContext and(SemanticContext a, SemanticContext b) {
//System.out.println("AND: "+a+"&&"+b);
if ( a==EMPTY_SEMANTIC_CONTEXT || a==null ) {
return b;
}
if ( b==EMPTY_SEMANTIC_CONTEXT || b==null ) {
return a;
}
if ( a.equals(b) ) {
return a; // if same, just return left one
}
//System.out.println("## have to AND");
return new AND(a,b);
}
public static SemanticContext or(SemanticContext a, SemanticContext b) {
//System.out.println("OR: "+a+"||"+b);
if ( a==EMPTY_SEMANTIC_CONTEXT || a==null ) {
return b;
}
if ( b==EMPTY_SEMANTIC_CONTEXT || b==null ) {
return a;
}
if ( a instanceof TruePredicate ) {
return a;
}
if ( b instanceof TruePredicate ) {
return b;
}
if ( a instanceof NOT && b instanceof Predicate ) {
NOT n = (NOT)a;
// check for !p||p
if ( n.ctx.equals(b) ) {
return new TruePredicate();
}
}
else if ( b instanceof NOT && a instanceof Predicate ) {
NOT n = (NOT)b;
// check for p||!p
if ( n.ctx.equals(a) ) {
return new TruePredicate();
}
}
else if ( a.equals(b) ) {
return a;
}
//System.out.println("## have to OR");
return new OR(a,b);
}
public static SemanticContext not(SemanticContext a) {
return new NOT(a);
}
}
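The and()/or() factory methods above apply a few algebraic simplifications: the empty context is an identity element, equal operands collapse, and p||!p reduces to true. A minimal sketch of those rules over plain strings (illustrative names, not ANTLR API; real SemanticContext nodes are AST-backed objects, not strings):

```java
// Sketch of the identity and tautology simplifications performed by
// SemanticContext.and()/or(). SemCtxDemo and its methods are hypothetical.
public class SemCtxDemo {
    static final String EMPTY = "";  // stands in for EMPTY_SEMANTIC_CONTEXT

    static String and(String a, String b) {
        if (a == null || a.equals(EMPTY)) return b;  // EMPTY is the identity
        if (b == null || b.equals(EMPTY)) return a;
        if (a.equals(b)) return a;                   // p && p == p
        return "(" + a + "&&" + b + ")";
    }

    static String or(String a, String b) {
        if (a == null || a.equals(EMPTY)) return b;  // EMPTY is the identity
        if (b == null || b.equals(EMPTY)) return a;
        if (("!(" + b + ")").equals(a) || ("!(" + a + ")").equals(b))
            return "true";                           // p || !p is a tautology
        if (a.equals(b)) return a;                   // p || p == p
        return "(" + a + "||" + b + ")";
    }

    public static void main(String[] args) {
        System.out.println(and(EMPTY, "p"));  // p
        System.out.println(and("p", "q"));    // (p&&q)
        System.out.println(or("p", "!(p)"));  // true
    }
}
```

These reductions matter because, as the July 2006 note explains, naive binary OR trees accumulated huge repeated || conditions in generated code.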

@@ -0,0 +1,54 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
*/
package org.antlr.analysis;
/** A generic state machine state. */
public abstract class State {
public static final int INVALID_STATE_NUMBER = -1;
public int stateNumber = INVALID_STATE_NUMBER;
/** An accept state is an end of rule state for lexers and
* parser grammar rules.
*/
protected boolean acceptState = false;
public abstract int getNumberOfTransitions();
public abstract void addTransition(Transition e);
public abstract Transition transition(int i);
public boolean isAcceptState() {
return acceptState;
}
public void setAcceptState(boolean acceptState) {
this.acceptState = acceptState;
}
}

@@ -0,0 +1,41 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
*/
package org.antlr.analysis;
/** A Cluster object points to the left/right (start and end) states of a
* state machine. Used to build NFAs.
*/
public class StateCluster {
public NFAState left;
public NFAState right;
public StateCluster(NFAState left, NFAState right) {
this.left = left;
this.right = right;
}
}

@@ -0,0 +1,84 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
*/
package org.antlr.analysis;
/** A generic transition between any two state machine states. It defines
* some special labels that indicate things like epsilon transitions and
* that the label is actually a set of labels or a semantic predicate.
* This is a one way link. It emanates from a state (usually via a list of
* transitions) and has a label/target pair. I have abstracted the notion
* of a Label to handle the various kinds of things it can be.
*/
public class Transition implements Comparable {
/** What label must be consumed to transition to target */
public Label label;
/** The target of this transition */
public State target;
public Transition(Label label, State target) {
this.label = label;
this.target = target;
}
public Transition(int label, State target) {
this.label = new Label(label);
this.target = target;
}
public boolean isEpsilon() {
return label.isEpsilon();
}
public boolean isAction() {
return label.isAction();
}
public boolean isSemanticPredicate() {
return label.isSemanticPredicate();
}
public int hashCode() {
return label.hashCode() + target.stateNumber;
}
public boolean equals(Object o) {
Transition other = (Transition)o;
return this.label.equals(other.label) &&
this.target.equals(other.target);
}
public int compareTo(Object o) {
Transition other = (Transition)o;
return this.label.compareTo(other.label);
}
public String toString() {
return label+"->"+target.stateNumber;
}
}
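The StateCluster and Transition classes above are the raw material for Thompson-style NFA construction: each grammar element becomes a small machine with a start (left) and end (right) state, and machines are stitched together with epsilon edges. The following is a self-contained sketch of that pattern; the names mirror the originals, but the `atom`/`concat` helpers and the `EPSILON` constant are illustrative assumptions, not ANTLR's actual builder code:

```java
import java.util.ArrayList;
import java.util.List;

public class NFASketch {
    static final int EPSILON = -1; // stand-in for ANTLR's epsilon label (assumed value)

    static class State {
        final int stateNumber;
        final List<Transition> transitions = new ArrayList<>();
        State(int n) { stateNumber = n; }
    }

    // One-way link: a label/target pair, as in the Transition class above.
    static class Transition {
        final int label;
        final State target;
        Transition(int label, State target) { this.label = label; this.target = target; }
    }

    // left/right = start/end states of a machine fragment, as in StateCluster above.
    static class StateCluster {
        final State left, right;
        StateCluster(State left, State right) { this.left = left; this.right = right; }
    }

    private static int stateCounter = 0;

    /** Fragment matching exactly one symbol: left --label--> right. */
    static StateCluster atom(int label) {
        State left = new State(stateCounter++);
        State right = new State(stateCounter++);
        left.transitions.add(new Transition(label, right));
        return new StateCluster(left, right);
    }

    /** Concatenate two fragments with an epsilon edge from a's end to b's start. */
    static StateCluster concat(StateCluster a, StateCluster b) {
        a.right.transitions.add(new Transition(EPSILON, b.left));
        return new StateCluster(a.left, b.right);
    }
}
```

Composing `atom('a')` and `atom('b')` with `concat` yields a cluster whose left is the first fragment's start and whose right is the second fragment's end, which is exactly why StateCluster only needs those two pointers.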

@@ -0,0 +1,190 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.codegen;
import org.antlr.analysis.*;
import org.antlr.misc.Utils;
import org.antlr.stringtemplate.StringTemplate;
import org.antlr.stringtemplate.StringTemplateGroup;
import java.util.List;
public class ACyclicDFACodeGenerator {
protected CodeGenerator parentGenerator;
public ACyclicDFACodeGenerator(CodeGenerator parent) {
this.parentGenerator = parent;
}
public StringTemplate genFixedLookaheadDecision(StringTemplateGroup templates,
DFA dfa)
{
return walkFixedDFAGeneratingStateMachine(templates, dfa, dfa.startState, 1);
}
protected StringTemplate walkFixedDFAGeneratingStateMachine(
StringTemplateGroup templates,
DFA dfa,
DFAState s,
int k)
{
//System.out.println("walk "+s.stateNumber+" in dfa for decision "+dfa.decisionNumber);
if ( s.isAcceptState() ) {
StringTemplate dfaST = templates.getInstanceOf("dfaAcceptState");
dfaST.setAttribute("alt", Utils.integer(s.getUniquelyPredictedAlt()));
return dfaST;
}
// the default templates for generating a state and its edges
// can be an if-then-else structure or a switch
String dfaStateName = "dfaState";
String dfaLoopbackStateName = "dfaLoopbackState";
String dfaOptionalBlockStateName = "dfaOptionalBlockState";
String dfaEdgeName = "dfaEdge";
if ( parentGenerator.canGenerateSwitch(s) ) {
dfaStateName = "dfaStateSwitch";
dfaLoopbackStateName = "dfaLoopbackStateSwitch";
dfaOptionalBlockStateName = "dfaOptionalBlockStateSwitch";
dfaEdgeName = "dfaEdgeSwitch";
}
StringTemplate dfaST = templates.getInstanceOf(dfaStateName);
if ( dfa.getNFADecisionStartState().decisionStateType==NFAState.LOOPBACK ) {
dfaST = templates.getInstanceOf(dfaLoopbackStateName);
}
else if ( dfa.getNFADecisionStartState().decisionStateType==NFAState.OPTIONAL_BLOCK_START ) {
dfaST = templates.getInstanceOf(dfaOptionalBlockStateName);
}
dfaST.setAttribute("k", Utils.integer(k));
dfaST.setAttribute("stateNumber", Utils.integer(s.stateNumber));
dfaST.setAttribute("semPredState",
Boolean.valueOf(s.isResolvedWithPredicates()));
/*
String description = dfa.getNFADecisionStartState().getDescription();
description = parentGenerator.target.getTargetStringLiteralFromString(description);
//System.out.println("DFA: "+description+" associated with AST "+dfa.getNFADecisionStartState());
if ( description!=null ) {
dfaST.setAttribute("description", description);
}
*/
int EOTPredicts = NFA.INVALID_ALT_NUMBER;
DFAState EOTTarget = null;
//System.out.println("DFA state "+s.stateNumber);
for (int i = 0; i < s.getNumberOfTransitions(); i++) {
Transition edge = (Transition) s.transition(i);
//System.out.println("edge "+s.stateNumber+"-"+edge.label.toString()+"->"+edge.target.stateNumber);
if ( edge.label.getAtom()==Label.EOT ) {
// don't generate a real edge for EOT; track alt EOT predicts
// generate that prediction in the else clause as default case
EOTTarget = (DFAState)edge.target;
EOTPredicts = EOTTarget.getUniquelyPredictedAlt();
/*
System.out.println("DFA s"+s.stateNumber+" EOT goes to s"+
edge.target.stateNumber+" predicates alt "+
EOTPredicts);
*/
continue;
}
StringTemplate edgeST = templates.getInstanceOf(dfaEdgeName);
// If the template wants all the label values delineated, do that
if ( edgeST.getFormalArgument("labels")!=null ) {
List labels = edge.label.getSet().toList();
for (int j = 0; j < labels.size(); j++) {
Integer vI = (Integer) labels.get(j);
String label =
parentGenerator.getTokenTypeAsTargetLabel(vI.intValue());
labels.set(j, label); // rewrite List element to be name
}
edgeST.setAttribute("labels", labels);
}
else { // else create an expression to evaluate (the general case)
edgeST.setAttribute("labelExpr",
parentGenerator.genLabelExpr(templates,edge,k));
}
// stick in any gated predicates for any edge if not already a pred
if ( !edge.label.isSemanticPredicate() ) {
DFAState target = (DFAState)edge.target;
SemanticContext preds =
target.getGatedPredicatesInNFAConfigurations();
if ( preds!=null ) {
//System.out.println("preds="+target.getGatedPredicatesInNFAConfigurations());
StringTemplate predST = preds.genExpr(parentGenerator,
parentGenerator.getTemplates(),
dfa);
edgeST.setAttribute("predicates", predST);
}
}
StringTemplate targetST =
walkFixedDFAGeneratingStateMachine(templates,
dfa,
(DFAState)edge.target,
k+1);
edgeST.setAttribute("targetState", targetST);
dfaST.setAttribute("edges", edgeST);
/*
System.out.println("back to DFA "+
dfa.decisionNumber+"."+s.stateNumber);
*/
}
// HANDLE EOT EDGE
if ( EOTPredicts!=NFA.INVALID_ALT_NUMBER ) {
// EOT unique predicts an alt
dfaST.setAttribute("eotPredictsAlt", Utils.integer(EOTPredicts));
}
else if ( EOTTarget!=null && EOTTarget.getNumberOfTransitions()>0 ) {
// EOT state has transitions so must split on predicates.
// Generate predicate else-if clauses and then generate
// NoViableAlt exception as else clause.
// Note: these predicates emanate from the EOT target state
// rather than the current DFAState s so the error message
// might be slightly misleading if you are looking at the
// state number. Predicates emanating from EOT targets are
// hoisted up to the state that has the EOT edge.
for (int i = 0; i < EOTTarget.getNumberOfTransitions(); i++) {
Transition predEdge = (Transition)EOTTarget.transition(i);
StringTemplate edgeST = templates.getInstanceOf(dfaEdgeName);
edgeST.setAttribute("labelExpr",
parentGenerator.genSemanticPredicateExpr(templates,predEdge));
// the target must be an accept state
//System.out.println("EOT edge");
StringTemplate targetST =
walkFixedDFAGeneratingStateMachine(templates,
dfa,
(DFAState)predEdge.target,
k+1);
edgeST.setAttribute("targetState", targetST);
dfaST.setAttribute("edges", edgeST);
}
}
return dfaST;
}
}
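walkFixedDFAGeneratingStateMachine above recurses through an acyclic DFA, emitting an accept-state template at the leaves and a chain of edge tests (incrementing the lookahead depth k) on the way down. Stripped of StringTemplate and the ANTLR types, the shape of that recursion can be sketched as follows; this builds strings directly for illustration, whereas the real generator fills templates, and the `DFAState`/`walk` names here are simplified stand-ins:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FixedDFASketch {
    static class DFAState {
        int predictsAlt = -1;                               // > 0 in accept states
        Map<String, DFAState> edges = new LinkedHashMap<>(); // label expr -> target
    }

    /**
     * Recursive walk: an accept state predicts an alternative; an interior
     * state becomes an if/else-if chain testing lookahead at depth k, with
     * each edge's target expanded at depth k+1.
     */
    static String walk(DFAState s, int k) {
        if (s.edges.isEmpty()) {
            return "alt=" + s.predictsAlt + ";";
        }
        StringBuilder buf = new StringBuilder();
        String keyword = "if";
        for (Map.Entry<String, DFAState> e : s.edges.entrySet()) {
            buf.append(keyword).append(" (LA(").append(k).append(")==")
               .append(e.getKey()).append(") { ")
               .append(walk(e.getValue(), k + 1)).append(" } ");
            keyword = "else if";
        }
        return buf.toString().trim();
    }
}
```

For a decision whose start state branches on 'a' (alt 1) versus 'b' (alt 2), the walk yields nested fixed-lookahead tests rather than a table-driven loop, which is the point of the "fixed lookahead decision" path.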

@@ -0,0 +1,100 @@
// $ANTLR 2.7.7 (2006-01-29): antlr.g -> ANTLRTokenTypes.txt$
ANTLR // output token vocab name
OPTIONS="options"=4
TOKENS="tokens"=5
PARSER="parser"=6
LEXER=7
RULE=8
BLOCK=9
OPTIONAL=10
CLOSURE=11
POSITIVE_CLOSURE=12
SYNPRED=13
RANGE=14
CHAR_RANGE=15
EPSILON=16
ALT=17
EOR=18
EOB=19
EOA=20
ID=21
ARG=22
ARGLIST=23
RET=24
LEXER_GRAMMAR=25
PARSER_GRAMMAR=26
TREE_GRAMMAR=27
COMBINED_GRAMMAR=28
INITACTION=29
FORCED_ACTION=30
LABEL=31
TEMPLATE=32
SCOPE="scope"=33
IMPORT="import"=34
GATED_SEMPRED=35
SYN_SEMPRED=36
BACKTRACK_SEMPRED=37
FRAGMENT="fragment"=38
DOT=39
ACTION=40
DOC_COMMENT=41
SEMI=42
LITERAL_lexer="lexer"=43
LITERAL_tree="tree"=44
LITERAL_grammar="grammar"=45
AMPERSAND=46
COLON=47
RCURLY=48
ASSIGN=49
STRING_LITERAL=50
CHAR_LITERAL=51
INT=52
STAR=53
COMMA=54
TOKEN_REF=55
LITERAL_protected="protected"=56
LITERAL_public="public"=57
LITERAL_private="private"=58
BANG=59
ARG_ACTION=60
LITERAL_returns="returns"=61
LITERAL_throws="throws"=62
LPAREN=63
OR=64
RPAREN=65
LITERAL_catch="catch"=66
LITERAL_finally="finally"=67
PLUS_ASSIGN=68
SEMPRED=69
IMPLIES=70
ROOT=71
WILDCARD=72
RULE_REF=73
NOT=74
TREE_BEGIN=75
QUESTION=76
PLUS=77
OPEN_ELEMENT_OPTION=78
CLOSE_ELEMENT_OPTION=79
REWRITE=80
ETC=81
DOLLAR=82
DOUBLE_QUOTE_STRING_LITERAL=83
DOUBLE_ANGLE_STRING_LITERAL=84
WS=85
COMMENT=86
SL_COMMENT=87
ML_COMMENT=88
STRAY_BRACKET=89
ESC=90
DIGIT=91
XDIGIT=92
NESTED_ARG_ACTION=93
NESTED_ACTION=94
ACTION_CHAR_LITERAL=95
ACTION_STRING_LITERAL=96
ACTION_ESC=97
WS_LOOP=98
INTERNAL_RULE_REF=99
WS_OPT=100
SRC=101
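The token vocabulary above is one `NAME=type` line per token, with an optional quoted literal between the name and the type (e.g. `OPTIONS="options"=4`), preceded by a generator comment and the vocab name. A minimal reader for this shape might look like the sketch below; this is an assumed illustration of the format, not ANTLR's actual vocab loader:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TokenVocabSketch {
    /**
     * Parse lines of the form NAME=type or NAME="literal"=type into a
     * name -> type map. Splits on the *last* '=' so the type survives even
     * when a quoted literal sits in the middle of the line.
     */
    static Map<String, Integer> parse(String[] lines) {
        Map<String, Integer> types = new LinkedHashMap<>();
        for (String line : lines) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("//") || !line.contains("=")) {
                continue; // skip comments and the bare vocab-name header line
            }
            int eq = line.lastIndexOf('=');
            String name = line.substring(0, eq);
            int quote = name.indexOf('=');       // strip ="literal" if present
            if (quote >= 0) name = name.substring(0, quote);
            types.put(name, Integer.parseInt(line.substring(eq + 1)));
        }
        return types;
    }
}
```

Splitting on the last `=` is the key detail: lines like `LITERAL_returns="returns"=61` contain two `=` signs, and only the final one separates the token type.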

@@ -0,0 +1,134 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.codegen;
import org.antlr.Tool;
import org.antlr.stringtemplate.StringTemplate;
import org.antlr.tool.Grammar;
public class ActionScriptTarget extends Target {
public String getTargetCharLiteralFromANTLRCharLiteral(
CodeGenerator generator,
String literal) {
int c = Grammar.getCharValueFromGrammarCharLiteral(literal);
return String.valueOf(c);
}
public String getTokenTypeAsTargetLabel(CodeGenerator generator,
int ttype) {
// use ints for predefined types;
// <invalid> <EOR> <DOWN> <UP>
if (ttype >= 0 && ttype <= 3) {
return String.valueOf(ttype);
}
String name = generator.grammar.getTokenDisplayName(ttype);
// If name is a literal, return the token type instead
if (name.charAt(0) == '\'') {
return String.valueOf(ttype);
}
return name;
}
/**
* ActionScript doesn't support Unicode String literals that are considered "illegal"
* or are in the surrogate pair ranges. For example "\uffff" will not encode properly,
* nor will "\ud800". To keep things as compact as possible we use the following encoding:
* if the int is 255 or below, we encode it as a hex literal;
* if the int is between 256 and 0x7fff, we use a single unicode literal with the value;
* if the int is above 0x7fff, we use a unicode literal of 0x80hh, where hh is the high-order
* bits, followed by \xll, where ll is the low-order bits of a 16-bit number.
*
* Ideally this should be improved at a future date. The most optimal way to encode this
* may be a compressed AMF encoding that is embedded using an Embed tag in ActionScript.
*
* @param v
* @return
*/
public String encodeIntAsCharEscape(int v) {
// encode as hex
if ( v<=255 ) {
return "\\x"+ Integer.toHexString(v|0x100).substring(1,3);
}
if (v <= 0x7fff) {
String hex = Integer.toHexString(v|0x10000).substring(1,5);
return "\\u"+hex;
}
if (v > 0xffff) {
System.err.println("Warning: character literal out of range for ActionScript target " + v);
return "";
}
StringBuffer buf = new StringBuffer("\\u80");
buf.append(Integer.toHexString((v >> 8) | 0x100).substring(1, 3)); // high-order bits
buf.append("\\x");
buf.append(Integer.toHexString((v & 0xff) | 0x100).substring(1, 3)); // low-order bits
return buf.toString();
}
/** Convert long to two 32-bit numbers separated by a comma.
* ActionScript does not support 64-bit numbers, so we need to break
* the number into two 32-bit literals to give to the BitSet. A number like
* 0xHHHHHHHHLLLLLLLL is broken into the following string:
* "0xLLLLLLLL, 0xHHHHHHHH"
* Note that the low order bits are first, followed by the high order bits.
* This is to match how the BitSet constructor works, where the bits are
* passed in in 32-bit chunks with low-order bits coming first.
*/
public String getTarget64BitStringFromValue(long word) {
StringBuffer buf = new StringBuffer(22); // enough for the two "0x", "," and " "
buf.append("0x");
writeHexWithPadding(buf, Integer.toHexString((int)(word & 0x00000000ffffffffL)));
buf.append(", 0x");
writeHexWithPadding(buf, Integer.toHexString((int)(word >> 32)));
return buf.toString();
}
private void writeHexWithPadding(StringBuffer buf, String digits) {
digits = digits.toUpperCase();
int padding = 8 - digits.length();
// pad left with zeros
for (int i=1; i<=padding; i++) {
buf.append('0');
}
buf.append(digits);
}
protected StringTemplate chooseWhereCyclicDFAsGo(Tool tool,
CodeGenerator generator,
Grammar grammar,
StringTemplate recognizerST,
StringTemplate cyclicDFAST) {
return recognizerST;
}
}
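The two encodings in ActionScriptTarget above are easy to get wrong off by one hex digit, so their arithmetic can be checked with a standalone re-implementation. The sketch below copies the same logic outside the Target hierarchy (the class and method names are illustrative): the `| 0x100` / `| 0x10000` trick forces leading zeros, and `substring` then drops the extra high digit.

```java
public class EscapeSketch {
    /** Mirrors encodeIntAsCharEscape: \xhh, \uhhhh, or the split \u80hh\xll form. */
    static String encode(int v) {
        if (v <= 255) {
            return "\\x" + Integer.toHexString(v | 0x100).substring(1, 3);
        }
        if (v <= 0x7fff) {
            return "\\u" + Integer.toHexString(v | 0x10000).substring(1, 5);
        }
        if (v > 0xffff) {
            return ""; // out of range for this target
        }
        return "\\u80"
            + Integer.toHexString((v >> 8) | 0x100).substring(1, 3)    // high-order byte
            + "\\x"
            + Integer.toHexString((v & 0xff) | 0x100).substring(1, 3); // low-order byte
    }

    /** Mirrors getTarget64BitStringFromValue: low 32 bits first, then high. */
    static String split64(long word) {
        return String.format("0x%08X, 0x%08X",
            (int) (word & 0x00000000ffffffffL), (int) (word >> 32));
    }
}
```

For example, 0x41 becomes `\x41`, 0x1234 becomes `\u1234`, and a value above 0x7fff such as 0x8a2b splits into `\u808a\x2b`, matching the scheme described in the javadoc above.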

@@ -0,0 +1,801 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2008 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
lexer grammar ActionTranslator;
options {
filter=true; // try all non-fragment rules in order specified
// output=template; TODO: can we make tokens return templates somehow?
}
@header {
package org.antlr.codegen;
import org.antlr.stringtemplate.StringTemplate;
import org.antlr.runtime.*;
import org.antlr.tool.*;
}
@members {
public List chunks = new ArrayList();
Rule enclosingRule;
int outerAltNum;
Grammar grammar;
CodeGenerator generator;
antlr.Token actionToken;
public ActionTranslator(CodeGenerator generator,
String ruleName,
GrammarAST actionAST)
{
this(new ANTLRStringStream(actionAST.token.getText()));
this.generator = generator;
this.grammar = generator.grammar;
this.enclosingRule = grammar.getLocallyDefinedRule(ruleName);
this.actionToken = actionAST.token;
this.outerAltNum = actionAST.outerAltNum;
}
public ActionTranslator(CodeGenerator generator,
String ruleName,
antlr.Token actionToken,
int outerAltNum)
{
this(new ANTLRStringStream(actionToken.getText()));
this.generator = generator;
grammar = generator.grammar;
this.enclosingRule = grammar.getRule(ruleName);
this.actionToken = actionToken;
this.outerAltNum = outerAltNum;
}
/** Return a list of strings and StringTemplate objects that
* represent the translated action.
*/
public List translateToChunks() {
// System.out.println("###\naction="+action);
Token t;
do {
t = nextToken();
} while ( t.getType()!= Token.EOF );
return chunks;
}
public String translate() {
List theChunks = translateToChunks();
//System.out.println("chunks="+a.chunks);
StringBuffer buf = new StringBuffer();
for (int i = 0; i < theChunks.size(); i++) {
Object o = (Object) theChunks.get(i);
buf.append(o);
}
//System.out.println("translated: "+buf.toString());
return buf.toString();
}
public List translateAction(String action) {
String rname = null;
if ( enclosingRule!=null ) {
rname = enclosingRule.name;
}
ActionTranslator translator =
new ActionTranslator(generator,
rname,
new antlr.CommonToken(ANTLRParser.ACTION,action),outerAltNum);
return translator.translateToChunks();
}
public boolean isTokenRefInAlt(String id) {
return enclosingRule.getTokenRefsInAlt(id, outerAltNum)!=null;
}
public boolean isRuleRefInAlt(String id) {
return enclosingRule.getRuleRefsInAlt(id, outerAltNum)!=null;
}
public Grammar.LabelElementPair getElementLabel(String id) {
return enclosingRule.getLabel(id);
}
public void checkElementRefUniqueness(String ref, boolean isToken) {
List refs = null;
if ( isToken ) {
refs = enclosingRule.getTokenRefsInAlt(ref, outerAltNum);
}
else {
refs = enclosingRule.getRuleRefsInAlt(ref, outerAltNum);
}
if ( refs!=null && refs.size()>1 ) {
ErrorManager.grammarError(ErrorManager.MSG_NONUNIQUE_REF,
grammar,
actionToken,
ref);
}
}
/** For \$rulelabel.name, return the Attribute found for name. It
* will be a predefined property or a return value.
*/
public Attribute getRuleLabelAttribute(String ruleName, String attrName) {
Rule r = grammar.getRule(ruleName);
AttributeScope scope = r.getLocalAttributeScope(attrName);
if ( scope!=null && !scope.isParameterScope ) {
return scope.getAttribute(attrName);
}
return null;
}
AttributeScope resolveDynamicScope(String scopeName) {
if ( grammar.getGlobalScope(scopeName)!=null ) {
return grammar.getGlobalScope(scopeName);
}
Rule scopeRule = grammar.getRule(scopeName);
if ( scopeRule!=null ) {
return scopeRule.ruleScope;
}
return null; // not a valid dynamic scope
}
protected StringTemplate template(String name) {
StringTemplate st = generator.getTemplates().getInstanceOf(name);
chunks.add(st);
return st;
}
}
/** $x.y x is enclosing rule, y is a return value, parameter, or
* predefined property.
*
* r[int i] returns [int j]
* : {$r.i, $r.j, $r.start, $r.stop, $r.st, $r.tree}
* ;
*/
SET_ENCLOSING_RULE_SCOPE_ATTR
: '$' x=ID '.' y=ID WS? '=' expr=ATTR_VALUE_EXPR ';'
{enclosingRule!=null &&
$x.text.equals(enclosingRule.name) &&
enclosingRule.getLocalAttributeScope($y.text)!=null}?
//{System.out.println("found \$rule.attr");}
{
StringTemplate st = null;
AttributeScope scope = enclosingRule.getLocalAttributeScope($y.text);
if ( scope.isPredefinedRuleScope ) {
if ( $y.text.equals("st") || $y.text.equals("tree") ) {
st = template("ruleSetPropertyRef_"+$y.text);
grammar.referenceRuleLabelPredefinedAttribute($x.text);
st.setAttribute("scope", $x.text);
st.setAttribute("attr", $y.text);
st.setAttribute("expr", translateAction($expr.text));
} else {
ErrorManager.grammarError(ErrorManager.MSG_WRITE_TO_READONLY_ATTR,
grammar,
actionToken,
$x.text,
$y.text);
}
}
else if ( scope.isPredefinedLexerRuleScope ) {
// this is a better message to emit than the previous one...
ErrorManager.grammarError(ErrorManager.MSG_WRITE_TO_READONLY_ATTR,
grammar,
actionToken,
$x.text,
$y.text);
}
else if ( scope.isParameterScope ) {
st = template("parameterSetAttributeRef");
st.setAttribute("attr", scope.getAttribute($y.text));
st.setAttribute("expr", translateAction($expr.text));
}
else { // must be return value
st = template("returnSetAttributeRef");
st.setAttribute("ruleDescriptor", enclosingRule);
st.setAttribute("attr", scope.getAttribute($y.text));
st.setAttribute("expr", translateAction($expr.text));
}
}
;
ENCLOSING_RULE_SCOPE_ATTR
: '$' x=ID '.' y=ID {enclosingRule!=null &&
$x.text.equals(enclosingRule.name) &&
enclosingRule.getLocalAttributeScope($y.text)!=null}?
//{System.out.println("found \$rule.attr");}
{
if ( isRuleRefInAlt($x.text) ) {
ErrorManager.grammarError(ErrorManager.MSG_RULE_REF_AMBIG_WITH_RULE_IN_ALT,
grammar,
actionToken,
$x.text);
}
StringTemplate st = null;
AttributeScope scope = enclosingRule.getLocalAttributeScope($y.text);
if ( scope.isPredefinedRuleScope ) {
st = template("rulePropertyRef_"+$y.text);
grammar.referenceRuleLabelPredefinedAttribute($x.text);
st.setAttribute("scope", $x.text);
st.setAttribute("attr", $y.text);
}
else if ( scope.isPredefinedLexerRuleScope ) {
// perhaps not the most precise error message to use, but...
ErrorManager.grammarError(ErrorManager.MSG_RULE_HAS_NO_ARGS,
grammar,
actionToken,
$x.text);
}
else if ( scope.isParameterScope ) {
st = template("parameterAttributeRef");
st.setAttribute("attr", scope.getAttribute($y.text));
}
else { // must be return value
st = template("returnAttributeRef");
st.setAttribute("ruleDescriptor", enclosingRule);
st.setAttribute("attr", scope.getAttribute($y.text));
}
}
;
/** Setting $tokenlabel.attr or $tokenref.attr where attr is a predefined property of a token is an error. */
SET_TOKEN_SCOPE_ATTR
: '$' x=ID '.' y=ID WS? '='
{enclosingRule!=null && input.LA(1)!='=' &&
(enclosingRule.getTokenLabel($x.text)!=null||
isTokenRefInAlt($x.text)) &&
AttributeScope.tokenScope.getAttribute($y.text)!=null}?
//{System.out.println("found \$tokenlabel.attr or \$tokenref.attr");}
{
ErrorManager.grammarError(ErrorManager.MSG_WRITE_TO_READONLY_ATTR,
grammar,
actionToken,
$x.text,
$y.text);
}
;
/** $tokenlabel.attr or $tokenref.attr where attr is a predefined property of a token.
* If in lexer grammar, only translate for strings and tokens (rule refs)
*/
TOKEN_SCOPE_ATTR
: '$' x=ID '.' y=ID {enclosingRule!=null &&
(enclosingRule.getTokenLabel($x.text)!=null||
isTokenRefInAlt($x.text)) &&
AttributeScope.tokenScope.getAttribute($y.text)!=null &&
(grammar.type!=Grammar.LEXER ||
getElementLabel($x.text).elementRef.token.getType()==ANTLRParser.TOKEN_REF ||
getElementLabel($x.text).elementRef.token.getType()==ANTLRParser.STRING_LITERAL)}?
// {System.out.println("found \$tokenlabel.attr or \$tokenref.attr");}
{
String label = $x.text;
if ( enclosingRule.getTokenLabel($x.text)==null ) {
// \$tokenref.attr gotta get old label or compute new one
checkElementRefUniqueness($x.text, true);
label = enclosingRule.getElementLabel($x.text, outerAltNum, generator);
if ( label==null ) {
ErrorManager.grammarError(ErrorManager.MSG_FORWARD_ELEMENT_REF,
grammar,
actionToken,
"\$"+$x.text+"."+$y.text);
label = $x.text;
}
}
StringTemplate st = template("tokenLabelPropertyRef_"+$y.text);
st.setAttribute("scope", label);
st.setAttribute("attr", AttributeScope.tokenScope.getAttribute($y.text));
}
;
/** Setting $rulelabel.attr or $ruleref.attr where attr is a predefined property is an error
* This must also fail if we try to access a local attribute's field, like $tree.scope = localObject.
* That must be handled by LOCAL_ATTR below. ANTLR only concerns itself with the top-level scope
* attributes declared in scope {} or parameters, return values and the like.
*/
SET_RULE_SCOPE_ATTR
@init {
Grammar.LabelElementPair pair=null;
String refdRuleName=null;
}
: '$' x=ID '.' y=ID WS? '=' {enclosingRule!=null && input.LA(1)!='='}?
{
pair = enclosingRule.getRuleLabel($x.text);
refdRuleName = $x.text;
if ( pair!=null ) {
refdRuleName = pair.referencedRuleName;
}
}
// supercomplicated because I can't exec the above action.
// This asserts that if it's a label or a ref to a rule proceed but only if the attribute
// is valid for that rule's scope
{(enclosingRule.getRuleLabel($x.text)!=null || isRuleRefInAlt($x.text)) &&
getRuleLabelAttribute(enclosingRule.getRuleLabel($x.text)!=null?enclosingRule.getRuleLabel($x.text).referencedRuleName:$x.text,$y.text)!=null}?
//{System.out.println("found set \$rulelabel.attr or \$ruleref.attr: "+$x.text+"."+$y.text);}
{
ErrorManager.grammarError(ErrorManager.MSG_WRITE_TO_READONLY_ATTR,
grammar,
actionToken,
$x.text,
$y.text);
}
;
/** $rulelabel.attr or $ruleref.attr where attr is a predefined property*/
RULE_SCOPE_ATTR
@init {
Grammar.LabelElementPair pair=null;
String refdRuleName=null;
}
: '$' x=ID '.' y=ID {enclosingRule!=null}?
{
pair = enclosingRule.getRuleLabel($x.text);
refdRuleName = $x.text;
if ( pair!=null ) {
refdRuleName = pair.referencedRuleName;
}
}
// supercomplicated because I can't exec the above action.
// This asserts that if it's a label or a ref to a rule proceed but only if the attribute
// is valid for that rule's scope
{(enclosingRule.getRuleLabel($x.text)!=null || isRuleRefInAlt($x.text)) &&
getRuleLabelAttribute(enclosingRule.getRuleLabel($x.text)!=null?enclosingRule.getRuleLabel($x.text).referencedRuleName:$x.text,$y.text)!=null}?
//{System.out.println("found \$rulelabel.attr or \$ruleref.attr: "+$x.text+"."+$y.text);}
{
String label = $x.text;
if ( pair==null ) {
// \$ruleref.attr gotta get old label or compute new one
checkElementRefUniqueness($x.text, false);
label = enclosingRule.getElementLabel($x.text, outerAltNum, generator);
if ( label==null ) {
ErrorManager.grammarError(ErrorManager.MSG_FORWARD_ELEMENT_REF,
grammar,
actionToken,
"\$"+$x.text+"."+$y.text);
label = $x.text;
}
}
StringTemplate st;
Rule refdRule = grammar.getRule(refdRuleName);
AttributeScope scope = refdRule.getLocalAttributeScope($y.text);
if ( scope.isPredefinedRuleScope ) {
st = template("ruleLabelPropertyRef_"+$y.text);
grammar.referenceRuleLabelPredefinedAttribute(refdRuleName);
st.setAttribute("scope", label);
st.setAttribute("attr", $y.text);
}
else if ( scope.isPredefinedLexerRuleScope ) {
st = template("lexerRuleLabelPropertyRef_"+$y.text);
grammar.referenceRuleLabelPredefinedAttribute(refdRuleName);
st.setAttribute("scope", label);
st.setAttribute("attr", $y.text);
}
else if ( scope.isParameterScope ) {
// TODO: error!
}
else {
st = template("ruleLabelRef");
st.setAttribute("referencedRule", refdRule);
st.setAttribute("scope", label);
st.setAttribute("attr", scope.getAttribute($y.text));
}
}
;
/** $label either a token label or token/rule list label like label+=expr */
LABEL_REF
: '$' ID {enclosingRule!=null &&
getElementLabel($ID.text)!=null &&
enclosingRule.getRuleLabel($ID.text)==null}?
// {System.out.println("found \$label");}
{
StringTemplate st;
Grammar.LabelElementPair pair = getElementLabel($ID.text);
if ( pair.type==Grammar.TOKEN_LABEL ||
pair.type==Grammar.CHAR_LABEL )
{
st = template("tokenLabelRef");
}
else {
st = template("listLabelRef");
}
st.setAttribute("label", $ID.text);
}
;
/** $tokenref in a non-lexer grammar */
ISOLATED_TOKEN_REF
: '$' ID {grammar.type!=Grammar.LEXER && enclosingRule!=null && isTokenRefInAlt($ID.text)}?
//{System.out.println("found \$tokenref");}
{
String label = enclosingRule.getElementLabel($ID.text, outerAltNum, generator);
checkElementRefUniqueness($ID.text, true);
if ( label==null ) {
ErrorManager.grammarError(ErrorManager.MSG_FORWARD_ELEMENT_REF,
grammar,
actionToken,
$ID.text);
}
else {
StringTemplate st = template("tokenLabelRef");
st.setAttribute("label", label);
}
}
;
/** $lexerruleref from within the lexer */
ISOLATED_LEXER_RULE_REF
: '$' ID {grammar.type==Grammar.LEXER &&
enclosingRule!=null &&
isRuleRefInAlt($ID.text)}?
//{System.out.println("found \$lexerruleref");}
{
String label = enclosingRule.getElementLabel($ID.text, outerAltNum, generator);
checkElementRefUniqueness($ID.text, false);
if ( label==null ) {
ErrorManager.grammarError(ErrorManager.MSG_FORWARD_ELEMENT_REF,
grammar,
actionToken,
$ID.text);
}
else {
StringTemplate st = template("lexerRuleLabel");
st.setAttribute("label", label);
}
}
;
/** $y return value, parameter, predefined rule property, or token/rule
* reference within enclosing rule's outermost alt.
* y must be a "local" reference; i.e., it must be referring to
* something defined within the enclosing rule.
*
* r[int i] returns [int j]
* : {$i, $j, $start, $stop, $st, $tree}
* ;
*
* TODO: this might get the dynamic scope's elements too.!!!!!!!!!
*/
SET_LOCAL_ATTR
: '$' ID WS? '=' expr=ATTR_VALUE_EXPR ';' {enclosingRule!=null
&& enclosingRule.getLocalAttributeScope($ID.text)!=null
&& !enclosingRule.getLocalAttributeScope($ID.text).isPredefinedLexerRuleScope}?
//{System.out.println("found set \$localattr");}
{
StringTemplate st;
AttributeScope scope = enclosingRule.getLocalAttributeScope($ID.text);
if ( scope.isPredefinedRuleScope ) {
if ($ID.text.equals("tree") || $ID.text.equals("st")) {
st = template("ruleSetPropertyRef_"+$ID.text);
grammar.referenceRuleLabelPredefinedAttribute(enclosingRule.name);
st.setAttribute("scope", enclosingRule.name);
st.setAttribute("attr", $ID.text);
st.setAttribute("expr", translateAction($expr.text));
} else {
ErrorManager.grammarError(ErrorManager.MSG_WRITE_TO_READONLY_ATTR,
grammar,
actionToken,
$ID.text,
"");
}
}
else if ( scope.isParameterScope ) {
st = template("parameterSetAttributeRef");
st.setAttribute("attr", scope.getAttribute($ID.text));
st.setAttribute("expr", translateAction($expr.text));
}
else {
st = template("returnSetAttributeRef");
st.setAttribute("ruleDescriptor", enclosingRule);
st.setAttribute("attr", scope.getAttribute($ID.text));
st.setAttribute("expr", translateAction($expr.text));
}
}
;
LOCAL_ATTR
: '$' ID {enclosingRule!=null && enclosingRule.getLocalAttributeScope($ID.text)!=null}?
//{System.out.println("found \$localattr");}
{
StringTemplate st;
AttributeScope scope = enclosingRule.getLocalAttributeScope($ID.text);
if ( scope.isPredefinedRuleScope ) {
st = template("rulePropertyRef_"+$ID.text);
grammar.referenceRuleLabelPredefinedAttribute(enclosingRule.name);
st.setAttribute("scope", enclosingRule.name);
st.setAttribute("attr", $ID.text);
}
else if ( scope.isPredefinedLexerRuleScope ) {
st = template("lexerRulePropertyRef_"+$ID.text);
st.setAttribute("scope", enclosingRule.name);
st.setAttribute("attr", $ID.text);
}
else if ( scope.isParameterScope ) {
st = template("parameterAttributeRef");
st.setAttribute("attr", scope.getAttribute($ID.text));
}
else {
st = template("returnAttributeRef");
st.setAttribute("ruleDescriptor", enclosingRule);
st.setAttribute("attr", scope.getAttribute($ID.text));
}
}
;
/** $x::y is the only way to access the attributes within a dynamic scope
* regardless of whether or not you are in the defining rule.
*
* scope Symbols { List names; }
* r
* scope {int i;}
* scope Symbols;
* : {$r::i=3;} s {$Symbols::names;}
* ;
* s : {$r::i; $Symbols::names;}
* ;
*/
SET_DYNAMIC_SCOPE_ATTR
: '$' x=ID '::' y=ID WS? '=' expr=ATTR_VALUE_EXPR ';'
{resolveDynamicScope($x.text)!=null &&
resolveDynamicScope($x.text).getAttribute($y.text)!=null}?
//{System.out.println("found set \$scope::attr "+ $x.text + "::" + $y.text + " to " + $expr.text);}
{
AttributeScope scope = resolveDynamicScope($x.text);
if ( scope!=null ) {
StringTemplate st = template("scopeSetAttributeRef");
st.setAttribute("scope", $x.text);
st.setAttribute("attr", scope.getAttribute($y.text));
st.setAttribute("expr", translateAction($expr.text));
}
else {
// error: invalid dynamic attribute
}
}
;
DYNAMIC_SCOPE_ATTR
: '$' x=ID '::' y=ID
{resolveDynamicScope($x.text)!=null &&
resolveDynamicScope($x.text).getAttribute($y.text)!=null}?
//{System.out.println("found \$scope::attr "+ $x.text + "::" + $y.text);}
{
AttributeScope scope = resolveDynamicScope($x.text);
if ( scope!=null ) {
StringTemplate st = template("scopeAttributeRef");
st.setAttribute("scope", $x.text);
st.setAttribute("attr", scope.getAttribute($y.text));
}
else {
// error: invalid dynamic attribute
}
}
;
ERROR_SCOPED_XY
: '$' x=ID '::' y=ID
{
chunks.add(getText());
generator.issueInvalidScopeError($x.text,$y.text,
enclosingRule,actionToken,
outerAltNum);
}
;
/** To access deeper (than top of stack) scopes, use the notation:
*
* $x[-1]::y previous (just under top of stack)
* $x[-i]::y top of stack - i where the '-' MUST BE PRESENT;
* i.e., i cannot simply be negative without the '-' sign!
* $x[i]::y absolute index i (0..size-1)
* $x[0]::y is the absolute 0 indexed element (bottom of the stack)
*/
DYNAMIC_NEGATIVE_INDEXED_SCOPE_ATTR
: '$' x=ID '[' '-' expr=SCOPE_INDEX_EXPR ']' '::' y=ID
// {System.out.println("found \$scope[-...]::attr");}
{
StringTemplate st = template("scopeAttributeRef");
st.setAttribute("scope", $x.text);
st.setAttribute("attr", resolveDynamicScope($x.text).getAttribute($y.text));
st.setAttribute("negIndex", $expr.text);
}
;
DYNAMIC_ABSOLUTE_INDEXED_SCOPE_ATTR
: '$' x=ID '[' expr=SCOPE_INDEX_EXPR ']' '::' y=ID
// {System.out.println("found \$scope[...]::attr");}
{
StringTemplate st = template("scopeAttributeRef");
st.setAttribute("scope", $x.text);
st.setAttribute("attr", resolveDynamicScope($x.text).getAttribute($y.text));
st.setAttribute("index", $expr.text);
}
;
fragment
SCOPE_INDEX_EXPR
: (~']')+
;
/** $r, where r is a rule's dynamic scope or a global shared scope.
* Isolated $rulename is not allowed unless it has a dynamic scope *and*
* there is no reference to rulename in the enclosing alternative,
* which would be ambiguous. See TestAttributes.testAmbiguousRuleRef()
*/
ISOLATED_DYNAMIC_SCOPE
: '$' ID {resolveDynamicScope($ID.text)!=null}?
// {System.out.println("found isolated \$scope where scope is a dynamic scope");}
{
StringTemplate st = template("isolatedDynamicScopeRef");
st.setAttribute("scope", $ID.text);
}
;
// antlr.g then codegen.g does these first two currently.
// don't want to duplicate that code.
/** %foo(a={},b={},...) ctor */
TEMPLATE_INSTANCE
: '%' ID '(' ( WS? ARG (',' WS? ARG)* WS? )? ')'
// {System.out.println("found \%foo(args)");}
{
String action = getText().substring(1,getText().length());
String ruleName = "<outside-of-rule>";
if ( enclosingRule!=null ) {
ruleName = enclosingRule.name;
}
StringTemplate st =
generator.translateTemplateConstructor(ruleName,
outerAltNum,
actionToken,
action);
if ( st!=null ) {
chunks.add(st);
}
}
;
/** %({name-expr})(a={},...) indirect template ctor reference */
INDIRECT_TEMPLATE_INSTANCE
: '%' '(' ACTION ')' '(' ( WS? ARG (',' WS? ARG)* WS? )? ')'
// {System.out.println("found \%({...})(args)");}
{
String action = getText().substring(1,getText().length());
StringTemplate st =
generator.translateTemplateConstructor(enclosingRule.name,
outerAltNum,
actionToken,
action);
chunks.add(st);
}
;
fragment
ARG : ID '=' ACTION
;
/** %{expr}.y = z; template attribute y of StringTemplate-typed expr to z */
SET_EXPR_ATTRIBUTE
: '%' a=ACTION '.' ID WS? '=' expr=ATTR_VALUE_EXPR ';'
// {System.out.println("found \%{expr}.y = z;");}
{
StringTemplate st = template("actionSetAttribute");
String action = $a.text;
action = action.substring(1,action.length()-1); // stuff inside {...}
st.setAttribute("st", translateAction(action));
st.setAttribute("attrName", $ID.text);
st.setAttribute("expr", translateAction($expr.text));
}
;
/* %x.y = z; set template attribute y of x (always set never get attr)
* to z [languages like python without ';' must still use the
* ';' which the code generator is free to remove during code gen]
*/
SET_ATTRIBUTE
: '%' x=ID '.' y=ID WS? '=' expr=ATTR_VALUE_EXPR ';'
// {System.out.println("found \%x.y = z;");}
{
StringTemplate st = template("actionSetAttribute");
st.setAttribute("st", $x.text);
st.setAttribute("attrName", $y.text);
st.setAttribute("expr", translateAction($expr.text));
}
;
/** Don't allow an = as first char to prevent $x == 3; kind of stuff. */
fragment
ATTR_VALUE_EXPR
: ~'=' (~';')*
;
/** %{string-expr} anonymous template from string expr */
TEMPLATE_EXPR
: '%' a=ACTION
// {System.out.println("found \%{expr}");}
{
StringTemplate st = template("actionStringConstructor");
String action = $a.text;
action = action.substring(1,action.length()-1); // stuff inside {...}
st.setAttribute("stringExpr", translateAction(action));
}
;
fragment
ACTION
: '{' (options {greedy=false;}:.)* '}'
;
ESC : '\\' '$' {chunks.add("\$");}
| '\\' '%' {chunks.add("\%");}
| '\\' ~('$'|'%') {chunks.add(getText());}
;
ERROR_XY
: '$' x=ID '.' y=ID
{
chunks.add(getText());
generator.issueInvalidAttributeError($x.text,$y.text,
enclosingRule,actionToken,
outerAltNum);
}
;
ERROR_X
: '$' x=ID
{
chunks.add(getText());
generator.issueInvalidAttributeError($x.text,
enclosingRule,actionToken,
outerAltNum);
}
;
UNKNOWN_SYNTAX
: '$'
{
chunks.add(getText());
// shouldn't need an error here. Just accept \$ if it doesn't look like anything
}
| '%' (ID|'.'|'('|')'|','|'{'|'}'|'"')*
{
chunks.add(getText());
ErrorManager.grammarError(ErrorManager.MSG_INVALID_TEMPLATE_ACTION,
grammar,
actionToken,
getText());
}
;
TEXT: ~('$'|'%'|'\\')+ {chunks.add(getText());}
;
fragment
ID : ('a'..'z'|'A'..'Z'|'_') ('a'..'z'|'A'..'Z'|'_'|'0'..'9')*
;
fragment
INT : '0'..'9'+
;
fragment
WS : (' '|'\t'|'\n'|'\r')+
;

File diff suppressed because it is too large

@@ -0,0 +1,34 @@
LOCAL_ATTR=17
SET_DYNAMIC_SCOPE_ATTR=18
ISOLATED_DYNAMIC_SCOPE=24
WS=5
UNKNOWN_SYNTAX=35
DYNAMIC_ABSOLUTE_INDEXED_SCOPE_ATTR=23
SCOPE_INDEX_EXPR=21
DYNAMIC_SCOPE_ATTR=19
ISOLATED_TOKEN_REF=14
SET_ATTRIBUTE=30
SET_EXPR_ATTRIBUTE=29
ACTION=27
ERROR_X=34
TEMPLATE_INSTANCE=26
TOKEN_SCOPE_ATTR=10
ISOLATED_LEXER_RULE_REF=15
ESC=32
SET_ENCLOSING_RULE_SCOPE_ATTR=7
ATTR_VALUE_EXPR=6
RULE_SCOPE_ATTR=12
LABEL_REF=13
INT=37
ARG=25
SET_LOCAL_ATTR=16
TEXT=36
DYNAMIC_NEGATIVE_INDEXED_SCOPE_ATTR=22
SET_TOKEN_SCOPE_ATTR=9
ERROR_SCOPED_XY=20
SET_RULE_SCOPE_ATTR=11
ENCLOSING_RULE_SCOPE_ATTR=8
ERROR_XY=33
TEMPLATE_EXPR=31
INDIRECT_TEMPLATE_INSTANCE=28
ID=4

@@ -0,0 +1,140 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.codegen;
import org.antlr.stringtemplate.StringTemplate;
import org.antlr.stringtemplate.StringTemplateGroup;
import org.antlr.tool.Grammar;
import org.antlr.Tool;
import java.io.IOException;
public class CPPTarget extends Target {
public String escapeChar( int c ) {
// System.out.println("CPPTarget.escapeChar("+c+")");
switch (c) {
case '\n' : return "\\n";
case '\t' : return "\\t";
case '\r' : return "\\r";
case '\\' : return "\\\\";
case '\'' : return "\\'";
case '"' : return "\\\"";
default :
if ( c < ' ' || c > 126 )
{
if (c > 255)
{
String s = Integer.toString(c,16);
// put leading zeroes in front of the thing..
while( s.length() < 4 )
s = '0' + s;
return "\\u" + s;
}
else {
return "\\" + Integer.toString(c,8);
}
}
else {
return String.valueOf((char)c);
}
}
}
/** Converts a String into a representation that can be used as a literal
* when surrounded by double-quotes.
*
* Used for escaping semantic predicate strings for exceptions.
*
* @param s The String to be changed into a literal
*/
public String escapeString(String s)
{
StringBuffer retval = new StringBuffer();
for (int i = 0; i < s.length(); i++) {
retval.append(escapeChar(s.charAt(i)));
}
return retval.toString();
}
protected void genRecognizerHeaderFile(Tool tool,
CodeGenerator generator,
Grammar grammar,
StringTemplate headerFileST,
String extName)
throws IOException
{
StringTemplateGroup templates = generator.getTemplates();
generator.write(headerFileST, grammar.name+extName);
}
/** Convert from an ANTLR char literal found in a grammar file to
* an equivalent char literal in the target language. For Java, this
 * is the identity translation; i.e., '\n' -> '\n'. Most languages
 * will be able to use this 1-to-1 mapping. Expect single quotes
 * around the incoming literal.
 * Depending on the char vocabulary, the char literal should be prefixed with an 'L'.
*/
public String getTargetCharLiteralFromANTLRCharLiteral( CodeGenerator codegen, String literal) {
int c = Grammar.getCharValueFromGrammarCharLiteral(literal);
String prefix = "'";
if( codegen.grammar.getMaxCharValue() > 255 )
prefix = "L'";
else if( (c & 0x80) != 0 ) // if in char mode prevent sign extensions
return ""+c;
return prefix+escapeChar(c)+"'";
}
/** Convert from an ANTLR string literal found in a grammar file to
* an equivalent string literal in the target language. For Java, this
 * is the identity translation; i.e., "\"\n" -> "\"\n". Most languages
 * will be able to use this 1-to-1 mapping. Expect double quotes
 * around the incoming literal.
 * Depending on the char vocabulary, the string should be prefixed with an 'L'.
*/
public String getTargetStringLiteralFromANTLRStringLiteral( CodeGenerator codegen, String literal) {
StringBuffer buf = Grammar.getUnescapedStringFromGrammarStringLiteral(literal);
String prefix = "\"";
if( codegen.grammar.getMaxCharValue() > 255 )
prefix = "L\"";
return prefix+escapeString(buf.toString())+"\"";
}
/** Character constants get truncated to this value.
* TODO: This should be derived from the charVocabulary. Depending on it
* being 255 or 0xFFFF the templates should generate normal character
* constants or multibyte ones.
*/
public int getMaxCharValue( CodeGenerator codegen ) {
int maxval = 255; // codegen.grammar.get????();
if ( maxval <= 255 )
return 255;
else
return maxval;
}
}
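The escaping policy in `escapeChar` above can be exercised in isolation: printable ASCII passes through, control and high-bit characters become octal escapes, and values above 0xFF become zero-padded `\uXXXX` literals. A minimal sketch under those rules (the class name `CppEscapeDemo` is ours, not part of ANTLR):

```java
public class CppEscapeDemo {
    // Mirrors the CPPTarget policy sketched above: named escapes first,
    // printable ASCII as-is, then octal for control/high-bit bytes and
    // \uXXXX (zero-padded to four hex digits) for values above 0xFF.
    static String escapeChar(int c) {
        switch (c) {
            case '\n': return "\\n";
            case '\t': return "\\t";
            case '\r': return "\\r";
            case '\\': return "\\\\";
            case '\'': return "\\'";
            case '"':  return "\\\"";
        }
        if (c >= ' ' && c <= 126) return String.valueOf((char) c);
        if (c > 255) {
            String s = Integer.toHexString(c);
            while (s.length() < 4) s = "0" + s;  // pad to \uXXXX width
            return "\\u" + s;
        }
        return "\\" + Integer.toString(c, 8);    // octal escape
    }

    public static void main(String[] args) {
        System.out.println(escapeChar('A'));    // A
        System.out.println(escapeChar(7));      // \7
        System.out.println(escapeChar(0x3B1));  // \u03b1
    }
}
```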

@@ -0,0 +1,57 @@
/*
[The "BSD licence"]
Copyright (c) 2006 Kunle Odutola
Copyright (c) 2005 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.codegen;
import org.antlr.Tool;
import org.antlr.stringtemplate.StringTemplate;
import org.antlr.tool.Grammar;
public class CSharp2Target extends Target
{
protected StringTemplate chooseWhereCyclicDFAsGo(Tool tool,
CodeGenerator generator,
Grammar grammar,
StringTemplate recognizerST,
StringTemplate cyclicDFAST)
{
return recognizerST;
}
public String encodeIntAsCharEscape(int v)
{
if (v <= 127)
{
String hex1 = Integer.toHexString(v | 0x10000).substring(3, 5);
return "\\x" + hex1;
}
String hex = Integer.toHexString(v | 0x10000).substring(1, 5);
return "\\u" + hex;
}
}
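The `| 0x10000` idiom in `encodeIntAsCharEscape` is what guarantees zero-padding: OR-ing in a 17th bit forces `Integer.toHexString` to emit five digits, so a fixed `substring` always yields a two-digit `\xNN` or four-digit `\uNNNN` field. A standalone sketch (`CharEscapeDemo` is a hypothetical name):

```java
public class CharEscapeDemo {
    // OR-ing with 0x10000 forces a 5-digit hex string ("1____"), so
    // substring(3, 5) is always 2 digits and substring(1, 5) always 4.
    static String encode(int v) {
        if (v <= 127) {
            return "\\x" + Integer.toHexString(v | 0x10000).substring(3, 5);
        }
        return "\\u" + Integer.toHexString(v | 0x10000).substring(1, 5);
    }

    public static void main(String[] args) {
        System.out.println(encode('A'));    // \x41
        System.out.println(encode(9));      // \x09 (padding kicks in)
        System.out.println(encode(0x3B1));  // \u03b1
    }
}
```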

@@ -0,0 +1,57 @@
/*
[The "BSD licence"]
Copyright (c) 2006 Kunle Odutola
Copyright (c) 2005 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.codegen;
import org.antlr.Tool;
import org.antlr.stringtemplate.StringTemplate;
import org.antlr.tool.Grammar;
public class CSharpTarget extends Target
{
protected StringTemplate chooseWhereCyclicDFAsGo(Tool tool,
CodeGenerator generator,
Grammar grammar,
StringTemplate recognizerST,
StringTemplate cyclicDFAST)
{
return recognizerST;
}
public String encodeIntAsCharEscape(int v)
{
if (v <= 127)
{
String hex1 = Integer.toHexString(v | 0x10000).substring(3, 5);
return "\\x" + hex1;
}
String hex = Integer.toHexString(v | 0x10000).substring(1, 5);
return "\\u" + hex;
}
}

@@ -0,0 +1,247 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.codegen;
import org.antlr.Tool;
import org.antlr.stringtemplate.StringTemplate;
import org.antlr.tool.Grammar;
import java.io.IOException;
import java.util.ArrayList;
public class CTarget extends Target {
ArrayList strings = new ArrayList();
protected void genRecognizerFile(Tool tool,
CodeGenerator generator,
Grammar grammar,
StringTemplate outputFileST)
throws IOException
{
// Before we write this, and cause it to generate its string,
// we need to add all the string literals that we are going to match
//
outputFileST.setAttribute("literals", strings);
String fileName = generator.getRecognizerFileName(grammar.name, grammar.type);
System.out.println("Generating " + fileName);
generator.write(outputFileST, fileName);
}
protected void genRecognizerHeaderFile(Tool tool,
CodeGenerator generator,
Grammar grammar,
StringTemplate headerFileST,
String extName)
throws IOException
{
// Pick up the file name we are generating. This method will return
// a file suffixed with .c, so we must substring and add the extName
// to it as we cannot assign into strings in Java.
//
String fileName = generator.getRecognizerFileName(grammar.name, grammar.type);
fileName = fileName.substring(0, fileName.length()-2) + extName;
System.out.println("Generating " + fileName);
generator.write(headerFileST, fileName);
}
protected StringTemplate chooseWhereCyclicDFAsGo(Tool tool,
CodeGenerator generator,
Grammar grammar,
StringTemplate recognizerST,
StringTemplate cyclicDFAST)
{
return recognizerST;
}
/** Is scope in @scope::name {action} valid for this kind of grammar?
* Targets like C++ may want to allow new scopes like headerfile or
* some such. The action names themselves are not policed at the
* moment so targets can add template actions w/o having to recompile
* ANTLR.
*/
public boolean isValidActionScope(int grammarType, String scope) {
switch (grammarType) {
case Grammar.LEXER :
if ( scope.equals("lexer") ) {return true;}
if ( scope.equals("header") ) {return true;}
if ( scope.equals("includes") ) {return true;}
if ( scope.equals("preincludes") ) {return true;}
if ( scope.equals("overrides") ) {return true;}
break;
case Grammar.PARSER :
if ( scope.equals("parser") ) {return true;}
if ( scope.equals("header") ) {return true;}
if ( scope.equals("includes") ) {return true;}
if ( scope.equals("preincludes") ) {return true;}
if ( scope.equals("overrides") ) {return true;}
break;
case Grammar.COMBINED :
if ( scope.equals("parser") ) {return true;}
if ( scope.equals("lexer") ) {return true;}
if ( scope.equals("header") ) {return true;}
if ( scope.equals("includes") ) {return true;}
if ( scope.equals("preincludes") ) {return true;}
if ( scope.equals("overrides") ) {return true;}
break;
case Grammar.TREE_PARSER :
if ( scope.equals("treeparser") ) {return true;}
if ( scope.equals("header") ) {return true;}
if ( scope.equals("includes") ) {return true;}
if ( scope.equals("preincludes") ) {return true;}
if ( scope.equals("overrides") ) {return true;}
break;
}
return false;
}
public String getTargetCharLiteralFromANTLRCharLiteral(
CodeGenerator generator,
String literal)
{
if (literal.startsWith("'\\u") )
{
literal = "0x" +literal.substring(3, 7);
}
else
{
int c = literal.charAt(1);
if (c < 32 || c > 127) {
literal = "0x" + Integer.toHexString(c);
}
}
return literal;
}
/** Convert from an ANTLR string literal found in a grammar file to
* an equivalent string literal in the C target.
 * Because we must support Unicode character sets and have chosen
 * to have the lexer match UTF-32 characters, we must encode
 * string matches as 32-bit character arrays. Here we must
 * produce the C array and cater for the case where the
 * lexer has been encoded with a string such as "xyz\n", which looks
 * slightly incongruous to me but is not incorrect.
*/
public String getTargetStringLiteralFromANTLRStringLiteral(
CodeGenerator generator,
String literal)
{
int index;
int outc;
String bytes;
StringBuffer buf = new StringBuffer();
buf.append("{ ");
// We need to lose any escaped characters of the form \x and just
// replace them with their actual values, as well as lose the surrounding
// quote marks.
//
for (int i = 1; i< literal.length()-1; i++)
{
buf.append("0x");
if (literal.charAt(i) == '\\')
{
i++; // Assume that there is a next character, this will just yield
// invalid strings if not, which is what the input would be of course - invalid
switch (literal.charAt(i))
{
case 'u':
case 'U':
buf.append(literal.substring(i+1, i+5)); // Already a hex string
i = i + 5; // Move to next string/char/escape
break;
case 'n':
case 'N':
buf.append("0A");
break;
case 'r':
case 'R':
buf.append("0D");
break;
case 't':
case 'T':
buf.append("09");
break;
case 'b':
case 'B':
buf.append("08");
break;
case 'f':
case 'F':
buf.append("0C");
break;
default:
// Anything else is what it is!
//
buf.append(Integer.toHexString((int)literal.charAt(i)).toUpperCase());
break;
}
}
else
{
buf.append(Integer.toHexString((int)literal.charAt(i)).toUpperCase());
}
buf.append(", ");
}
buf.append(" ANTLR3_STRING_TERMINATOR}");
bytes = buf.toString();
index = strings.indexOf(bytes);
if (index == -1)
{
strings.add(bytes);
index = strings.indexOf(bytes);
}
String strref = "lit_" + String.valueOf(index+1);
return strref;
}
}
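CTarget's string-literal encoding above turns each character of the quote-stripped literal into an uppercase hex element of a 32-bit array. The core loop can be sketched without the codegen machinery; this simplified version (class name `CStringLiteralDemo` is ours) resolves only a few escapes and emits hex values without the two-digit padding the real target hardcodes for `\n` and friends:

```java
public class CStringLiteralDemo {
    // Simplified sketch of the loop above: strip the surrounding quotes,
    // resolve \n/\r/\t escapes, and emit each character as an uppercase
    // hex array element with a terminator appended.
    static String encode(String literal) {
        StringBuilder buf = new StringBuilder("{ ");
        for (int i = 1; i < literal.length() - 1; i++) {
            char ch = literal.charAt(i);
            if (ch == '\\') {  // resolve a two-character escape
                i++;
                switch (literal.charAt(i)) {
                    case 'n': ch = '\n'; break;
                    case 'r': ch = '\r'; break;
                    case 't': ch = '\t'; break;
                    default:  ch = literal.charAt(i); break;
                }
            }
            buf.append("0x").append(Integer.toHexString(ch).toUpperCase()).append(", ");
        }
        return buf.append(" ANTLR3_STRING_TERMINATOR}").toString();
    }

    public static void main(String[] args) {
        System.out.println(encode("\"ab\\n\""));
        // { 0x61, 0x62, 0xA,  ANTLR3_STRING_TERMINATOR}
    }
}
```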

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -0,0 +1,140 @@
// $ANTLR 2.7.7 (2006-01-29): "codegen.g" -> "CodeGenTreeWalker.java"$
/*
[The "BSD licence"]
Copyright (c) 2005-2008 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.codegen;
import org.antlr.tool.*;
import org.antlr.analysis.*;
import org.antlr.misc.*;
import java.util.*;
import org.antlr.stringtemplate.*;
import antlr.TokenWithIndex;
import antlr.CommonToken;
public interface CodeGenTreeWalkerTokenTypes {
int EOF = 1;
int NULL_TREE_LOOKAHEAD = 3;
int OPTIONS = 4;
int TOKENS = 5;
int PARSER = 6;
int LEXER = 7;
int RULE = 8;
int BLOCK = 9;
int OPTIONAL = 10;
int CLOSURE = 11;
int POSITIVE_CLOSURE = 12;
int SYNPRED = 13;
int RANGE = 14;
int CHAR_RANGE = 15;
int EPSILON = 16;
int ALT = 17;
int EOR = 18;
int EOB = 19;
int EOA = 20;
int ID = 21;
int ARG = 22;
int ARGLIST = 23;
int RET = 24;
int LEXER_GRAMMAR = 25;
int PARSER_GRAMMAR = 26;
int TREE_GRAMMAR = 27;
int COMBINED_GRAMMAR = 28;
int INITACTION = 29;
int FORCED_ACTION = 30;
int LABEL = 31;
int TEMPLATE = 32;
int SCOPE = 33;
int IMPORT = 34;
int GATED_SEMPRED = 35;
int SYN_SEMPRED = 36;
int BACKTRACK_SEMPRED = 37;
int FRAGMENT = 38;
int DOT = 39;
int ACTION = 40;
int DOC_COMMENT = 41;
int SEMI = 42;
int LITERAL_lexer = 43;
int LITERAL_tree = 44;
int LITERAL_grammar = 45;
int AMPERSAND = 46;
int COLON = 47;
int RCURLY = 48;
int ASSIGN = 49;
int STRING_LITERAL = 50;
int CHAR_LITERAL = 51;
int INT = 52;
int STAR = 53;
int COMMA = 54;
int TOKEN_REF = 55;
int LITERAL_protected = 56;
int LITERAL_public = 57;
int LITERAL_private = 58;
int BANG = 59;
int ARG_ACTION = 60;
int LITERAL_returns = 61;
int LITERAL_throws = 62;
int LPAREN = 63;
int OR = 64;
int RPAREN = 65;
int LITERAL_catch = 66;
int LITERAL_finally = 67;
int PLUS_ASSIGN = 68;
int SEMPRED = 69;
int IMPLIES = 70;
int ROOT = 71;
int WILDCARD = 72;
int RULE_REF = 73;
int NOT = 74;
int TREE_BEGIN = 75;
int QUESTION = 76;
int PLUS = 77;
int OPEN_ELEMENT_OPTION = 78;
int CLOSE_ELEMENT_OPTION = 79;
int REWRITE = 80;
int ETC = 81;
int DOLLAR = 82;
int DOUBLE_QUOTE_STRING_LITERAL = 83;
int DOUBLE_ANGLE_STRING_LITERAL = 84;
int WS = 85;
int COMMENT = 86;
int SL_COMMENT = 87;
int ML_COMMENT = 88;
int STRAY_BRACKET = 89;
int ESC = 90;
int DIGIT = 91;
int XDIGIT = 92;
int NESTED_ARG_ACTION = 93;
int NESTED_ACTION = 94;
int ACTION_CHAR_LITERAL = 95;
int ACTION_STRING_LITERAL = 96;
int ACTION_ESC = 97;
int WS_LOOP = 98;
int INTERNAL_RULE_REF = 99;
int WS_OPT = 100;
int SRC = 101;
}

@@ -0,0 +1,100 @@
// $ANTLR 2.7.7 (2006-01-29): codegen.g -> CodeGenTreeWalkerTokenTypes.txt$
CodeGenTreeWalker // output token vocab name
OPTIONS="options"=4
TOKENS="tokens"=5
PARSER="parser"=6
LEXER=7
RULE=8
BLOCK=9
OPTIONAL=10
CLOSURE=11
POSITIVE_CLOSURE=12
SYNPRED=13
RANGE=14
CHAR_RANGE=15
EPSILON=16
ALT=17
EOR=18
EOB=19
EOA=20
ID=21
ARG=22
ARGLIST=23
RET=24
LEXER_GRAMMAR=25
PARSER_GRAMMAR=26
TREE_GRAMMAR=27
COMBINED_GRAMMAR=28
INITACTION=29
FORCED_ACTION=30
LABEL=31
TEMPLATE=32
SCOPE="scope"=33
IMPORT="import"=34
GATED_SEMPRED=35
SYN_SEMPRED=36
BACKTRACK_SEMPRED=37
FRAGMENT="fragment"=38
DOT=39
ACTION=40
DOC_COMMENT=41
SEMI=42
LITERAL_lexer="lexer"=43
LITERAL_tree="tree"=44
LITERAL_grammar="grammar"=45
AMPERSAND=46
COLON=47
RCURLY=48
ASSIGN=49
STRING_LITERAL=50
CHAR_LITERAL=51
INT=52
STAR=53
COMMA=54
TOKEN_REF=55
LITERAL_protected="protected"=56
LITERAL_public="public"=57
LITERAL_private="private"=58
BANG=59
ARG_ACTION=60
LITERAL_returns="returns"=61
LITERAL_throws="throws"=62
LPAREN=63
OR=64
RPAREN=65
LITERAL_catch="catch"=66
LITERAL_finally="finally"=67
PLUS_ASSIGN=68
SEMPRED=69
IMPLIES=70
ROOT=71
WILDCARD=72
RULE_REF=73
NOT=74
TREE_BEGIN=75
QUESTION=76
PLUS=77
OPEN_ELEMENT_OPTION=78
CLOSE_ELEMENT_OPTION=79
REWRITE=80
ETC=81
DOLLAR=82
DOUBLE_QUOTE_STRING_LITERAL=83
DOUBLE_ANGLE_STRING_LITERAL=84
WS=85
COMMENT=86
SL_COMMENT=87
ML_COMMENT=88
STRAY_BRACKET=89
ESC=90
DIGIT=91
XDIGIT=92
NESTED_ARG_ACTION=93
NESTED_ACTION=94
ACTION_CHAR_LITERAL=95
ACTION_STRING_LITERAL=96
ACTION_ESC=97
WS_LOOP=98
INTERNAL_RULE_REF=99
WS_OPT=100
SRC=101

File diff suppressed because it is too large

@@ -0,0 +1,47 @@
package org.antlr.codegen;
import java.util.*;
public class JavaScriptTarget extends Target {
/** Convert an int to a JavaScript Unicode character literal.
*
* The current JavaScript spec (ECMA-262) doesn't provide for octal
* notation in String literals, although some implementations support it.
* This method overrides the parent class so that characters will always
* be encoded as Unicode literals (e.g. \u0011).
*/
public String encodeIntAsCharEscape(int v) {
String hex = Integer.toHexString(v|0x10000).substring(1,5);
return "\\u"+hex;
}
/** Convert a long to two 32-bit numbers separated by a comma.
 * JavaScript does not support 64-bit numbers, so we need to break
 * the number into two 32-bit literals to give to the BitSet. A number like
 * 0xHHHHHHHHLLLLLLLL is broken into the following string:
 * "0xLLLLLLLL, 0xHHHHHHHH"
 * Note that the low order bits are first, followed by the high order bits.
 * This is to match how the BitSet constructor works, where the bits are
 * passed in as 32-bit chunks with low-order bits coming first.
*
* Note: stole the following two methods from the ActionScript target.
*/
public String getTarget64BitStringFromValue(long word) {
StringBuffer buf = new StringBuffer(22); // enough for the two "0x", "," and " "
buf.append("0x");
writeHexWithPadding(buf, Integer.toHexString((int)(word & 0x00000000ffffffffL)));
buf.append(", 0x");
writeHexWithPadding(buf, Integer.toHexString((int)(word >> 32)));
return buf.toString();
}
private void writeHexWithPadding(StringBuffer buf, String digits) {
digits = digits.toUpperCase();
int padding = 8 - digits.length();
// pad left with zeros
for (int i=1; i<=padding; i++) {
buf.append('0');
}
buf.append(digits);
}
}
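The low-word-first split described in the comment above can be verified in isolation. A minimal sketch (`WordSplitDemo` is a hypothetical name, not ANTLR code):

```java
public class WordSplitDemo {
    // Left-pad a hex string to 8 uppercase digits, like the target's helper.
    static String pad8(String digits) {
        StringBuilder sb = new StringBuilder();
        for (int i = digits.length(); i < 8; i++) sb.append('0');
        return sb.append(digits.toUpperCase()).toString();
    }

    // Emit "0xLLLLLLLL, 0xHHHHHHHH": low-order 32 bits first, to match
    // how the BitSet constructor consumes its 32-bit chunks.
    static String split(long word) {
        return "0x" + pad8(Integer.toHexString((int) (word & 0x00000000ffffffffL)))
             + ", 0x" + pad8(Integer.toHexString((int) (word >> 32)));
    }

    public static void main(String[] args) {
        System.out.println(split(0x123456789ABCDEF0L)); // 0x9ABCDEF0, 0x12345678
    }
}
```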

@@ -0,0 +1,44 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.codegen;
import org.antlr.Tool;
import org.antlr.stringtemplate.StringTemplate;
import org.antlr.tool.Grammar;
public class JavaTarget extends Target {
protected StringTemplate chooseWhereCyclicDFAsGo(Tool tool,
CodeGenerator generator,
Grammar grammar,
StringTemplate recognizerST,
StringTemplate cyclicDFAST)
{
return recognizerST;
}
}

@@ -0,0 +1,109 @@
/*
[The "BSD licence"]
Copyright (c) 2005 Terence Parr
Copyright (c) 2006 Kay Roepke (Objective-C runtime)
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.codegen;
import org.antlr.stringtemplate.StringTemplate;
import org.antlr.tool.Grammar;
import org.antlr.Tool;
import org.antlr.misc.Utils;
import java.io.IOException;
public class ObjCTarget extends Target {
protected void genRecognizerHeaderFile(Tool tool,
CodeGenerator generator,
Grammar grammar,
StringTemplate headerFileST,
String extName)
throws IOException
{
generator.write(headerFileST, grammar.name + Grammar.grammarTypeToFileNameSuffix[grammar.type] + extName);
}
public String getTargetCharLiteralFromANTLRCharLiteral(CodeGenerator generator,
String literal)
{
if (literal.startsWith("'\\u") ) {
literal = "0x" +literal.substring(3, 7);
} else {
int c = literal.charAt(1); // TJP
if (c < 32 || c > 127) {
literal = "0x" + Integer.toHexString(c);
}
}
return literal;
}
/** Convert from an ANTLR string literal found in a grammar file to
* an equivalent string literal in the target language. For Java, this
* is the translation 'a\n"' -> "a\n\"". Expect single quotes
* around the incoming literal. Just flip the quotes and replace
* double quotes with \"
*/
public String getTargetStringLiteralFromANTLRStringLiteral(CodeGenerator generator,
String literal)
{
literal = Utils.replace(literal,"\"","\\\"");
StringBuffer buf = new StringBuffer(literal);
buf.setCharAt(0,'"');
buf.setCharAt(literal.length()-1,'"');
buf.insert(0,'@');
return buf.toString();
}
/** If we have a label, prefix it with the recognizer's name */
public String getTokenTypeAsTargetLabel(CodeGenerator generator, int ttype) {
String name = generator.grammar.getTokenDisplayName(ttype);
// If name is a literal, return the token type instead
if ( name.charAt(0)=='\'' ) {
return String.valueOf(ttype);
}
return generator.grammar.name + Grammar.grammarTypeToFileNameSuffix[generator.grammar.type] + "_" + name;
//return super.getTokenTypeAsTargetLabel(generator, ttype);
//return this.getTokenTextAndTypeAsTargetLabel(generator, null, ttype);
}
/** Target must be able to override the labels used for token types. Sometimes also depends on the token text.*/
public String getTokenTextAndTypeAsTargetLabel(CodeGenerator generator, String text, int tokenType) {
String name = generator.grammar.getTokenDisplayName(tokenType);
// If name is a literal, return the token type instead
if ( name.charAt(0)=='\'' ) {
return String.valueOf(tokenType);
}
String textEquivalent = text == null ? name : text;
if (textEquivalent.charAt(0) >= '0' && textEquivalent.charAt(0) <= '9') {
return textEquivalent;
} else {
return generator.grammar.name + Grammar.grammarTypeToFileNameSuffix[generator.grammar.type] + "_" + textEquivalent;
}
}
}
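The quote-flipping in ObjCTarget.getTargetStringLiteralFromANTLRStringLiteral above is compact enough to check standalone. The class and method names below are hypothetical (not part of ANTLR); the sketch only mirrors the escape-then-flip steps, including the Objective-C `@` prefix that makes the result an NSString literal.

```java
// Hypothetical standalone sketch of ObjCTarget's string-literal translation:
// escape embedded double quotes, flip the surrounding single quotes to
// double quotes, then prepend '@' to form an NSString literal.
public class ObjCLiteralDemo {
    static String toObjCString(String literal) {
        String s = literal.replace("\"", "\\\"");     // " -> \"
        StringBuilder buf = new StringBuilder(s);
        buf.setCharAt(0, '"');                        // leading ' -> "
        buf.setCharAt(buf.length() - 1, '"');         // trailing ' -> "
        buf.insert(0, '@');                           // NSString prefix
        return buf.toString();
    }

    public static void main(String[] args) {
        System.out.println(toObjCString("'hello'"));  // @"hello"
    }
}
```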


@@ -0,0 +1,78 @@
/*
[The "BSD licence"]
Copyright (c) 2007 Ronald Blaschke
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.codegen;
import org.antlr.analysis.Label;
import org.antlr.tool.AttributeScope;
import org.antlr.tool.Grammar;
import org.antlr.tool.RuleLabelScope;
public class Perl5Target extends Target {
public Perl5Target() {
AttributeScope.tokenScope.addAttribute("self", null);
RuleLabelScope.predefinedLexerRulePropertiesScope.addAttribute("self", null);
}
public String getTargetCharLiteralFromANTLRCharLiteral(final CodeGenerator generator,
final String literal) {
final StringBuffer buf = new StringBuffer(10);
final int c = Grammar.getCharValueFromGrammarCharLiteral(literal);
if (c < Label.MIN_CHAR_VALUE) {
buf.append("\\x{0000}");
} else if (c < targetCharValueEscape.length &&
targetCharValueEscape[c] != null) {
buf.append(targetCharValueEscape[c]);
} else if (Character.UnicodeBlock.of((char) c) ==
Character.UnicodeBlock.BASIC_LATIN &&
!Character.isISOControl((char) c)) {
// normal char
buf.append((char) c);
} else {
// must be something unprintable...use \\uXXXX
// turn on the bit above max "\\uFFFF" value so that we pad with zeros
// then only take last 4 digits
String hex = Integer.toHexString(c | 0x10000).toUpperCase().substring(1, 5);
buf.append("\\x{");
buf.append(hex);
buf.append("}");
}
if (buf.indexOf("\\") == -1) {
// no need for interpolation, use single quotes
buf.insert(0, '\'');
buf.append('\'');
} else {
// need string interpolation
buf.insert(0, '\"');
buf.append('\"');
}
return buf.toString();
}
}
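The single-vs-double quoting decision above (Perl interpolation only when a backslash escape is present) can be sketched in isolation. The names here are hypothetical; the sketch handles only a newline escape, printable ASCII, and the zero-padded `\x{...}` fallback.

```java
// Hypothetical sketch of Perl5Target's quoting rule: output containing a
// backslash escape needs Perl interpolation (double quotes); plain printable
// characters can use non-interpolating single quotes.
public class PerlCharDemo {
    static String perlChar(int c) {
        StringBuilder buf = new StringBuilder();
        if (c == '\n') {
            buf.append("\\n");                        // common escape
        } else if (c >= 0x20 && c < 0x7F) {
            buf.append((char) c);                     // printable ASCII
        } else {
            // set the bit above 0xFFFF so the hex string is zero-padded,
            // then keep only the last four digits
            String hex = Integer.toHexString(c | 0x10000).toUpperCase().substring(1, 5);
            buf.append("\\x{").append(hex).append("}");
        }
        char quote = buf.indexOf("\\") != -1 ? '"' : '\'';
        return quote + buf.toString() + quote;
    }

    public static void main(String[] args) {
        System.out.println(perlChar('a'));   // 'a'
        System.out.println(perlChar('\n'));  // "\n"
    }
}
```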


@@ -0,0 +1,219 @@
/*
[The "BSD licence"]
Copyright (c) 2005 Martin Traverso
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/*
Please excuse my obvious lack of Java experience. The code here is probably
full of WTFs - though IMHO Java is the Real WTF(TM) here...
*/
package org.antlr.codegen;
import org.antlr.tool.Grammar;
import java.util.*;
public class PythonTarget extends Target {
/** Target must be able to override the labels used for token types */
public String getTokenTypeAsTargetLabel(CodeGenerator generator,
int ttype) {
// use ints for predefined types;
// <invalid> <EOR> <DOWN> <UP>
if ( ttype >= 0 && ttype <= 3 ) {
return String.valueOf(ttype);
}
String name = generator.grammar.getTokenDisplayName(ttype);
// If name is a literal, return the token type instead
if ( name.charAt(0)=='\'' ) {
return String.valueOf(ttype);
}
return name;
}
public String getTargetCharLiteralFromANTLRCharLiteral(
CodeGenerator generator,
String literal) {
int c = Grammar.getCharValueFromGrammarCharLiteral(literal);
return String.valueOf(c);
}
private List splitLines(String text) {
ArrayList l = new ArrayList();
int idx = 0;
while ( true ) {
int eol = text.indexOf("\n", idx);
if ( eol == -1 ) {
l.add(text.substring(idx));
break;
}
else {
l.add(text.substring(idx, eol+1));
idx = eol+1;
}
}
return l;
}
public List postProcessAction(List chunks, antlr.Token actionToken) {
/* TODO
- check for and report TAB usage
*/
//System.out.println("\n*** Action at " + actionToken.getLine() + ":" + actionToken.getColumn());
/* First I create a new list of chunks. String chunks are split into
lines and some whitespace may be added at the beginning.
As a result I get a list of chunks
- where the first line starts at column 0
- where every LF is at the end of a string chunk
*/
List nChunks = new ArrayList();
for (int i = 0; i < chunks.size(); i++) {
Object chunk = chunks.get(i);
if ( chunk instanceof String ) {
String text = (String)chunks.get(i);
if ( nChunks.size() == 0 && actionToken.getColumn() > 0 ) {
// first chunk and some 'virtual' WS at beginning
// prepend to this chunk
String ws = "";
for ( int j = 0 ; j < actionToken.getColumn() ; j++ ) {
ws += " ";
}
text = ws + text;
}
List parts = splitLines(text);
for ( int j = 0 ; j < parts.size() ; j++ ) {
chunk = parts.get(j);
nChunks.add(chunk);
}
}
else {
if ( nChunks.size() == 0 && actionToken.getColumn() > 0 ) {
// first chunk and some 'virtual' WS at beginning
// add as a chunk of its own
String ws = "";
for ( int j = 0 ; j < actionToken.getColumn() ; j++ ) {
ws += " ";
}
nChunks.add(ws);
}
nChunks.add(chunk);
}
}
int lineNo = actionToken.getLine();
int col = 0;
// strip trailing empty lines
int lastChunk = nChunks.size() - 1;
while ( lastChunk > 0
&& nChunks.get(lastChunk) instanceof String
&& ((String)nChunks.get(lastChunk)).trim().length() == 0 )
lastChunk--;
// strip leading empty lines
int firstChunk = 0;
while ( firstChunk <= lastChunk
&& nChunks.get(firstChunk) instanceof String
&& ((String)nChunks.get(firstChunk)).trim().length() == 0
&& ((String)nChunks.get(firstChunk)).endsWith("\n") ) {
lineNo++;
firstChunk++;
}
int indent = -1;
for ( int i = firstChunk ; i <= lastChunk ; i++ ) {
Object chunk = nChunks.get(i);
//System.out.println(lineNo + ":" + col + " " + quote(chunk.toString()));
if ( chunk instanceof String ) {
String text = (String)chunk;
if ( col == 0 ) {
if ( indent == -1 ) {
// first non-blank line
// count number of leading whitespaces
indent = 0;
for ( int j = 0; j < text.length(); j++ ) {
if ( !Character.isWhitespace(text.charAt(j)) )
break;
indent++;
}
}
if ( text.length() >= indent ) {
int j;
for ( j = 0; j < indent ; j++ ) {
if ( !Character.isWhitespace(text.charAt(j)) ) {
// should do real error reporting here...
System.err.println("Warning: badly indented line " + lineNo + " in action:");
System.err.println(text);
break;
}
}
nChunks.set(i, text.substring(j));
}
else if ( text.trim().length() > 0 ) {
// should do real error reporting here...
System.err.println("Warning: badly indented line " + lineNo + " in action:");
System.err.println(text);
}
}
if ( text.endsWith("\n") ) {
lineNo++;
col = 0;
}
else {
col += text.length();
}
}
else {
// not really correct, but all I need is col to increment...
col += 1;
}
}
return nChunks;
}
}
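The splitLines helper above drives the indentation pass: every chunk keeps its trailing newline, so postProcessAction can treat "ends with a newline" as an end-of-line marker. A hypothetical standalone copy for reference:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical standalone copy of PythonTarget.splitLines: each returned
// chunk keeps its trailing '\n'; the text after the last newline (possibly
// empty) becomes the final chunk.
public class SplitLinesDemo {
    static List<String> splitLines(String text) {
        List<String> lines = new ArrayList<>();
        int idx = 0;
        while (true) {
            int eol = text.indexOf('\n', idx);
            if (eol == -1) {                          // no more newlines
                lines.add(text.substring(idx));
                break;
            }
            lines.add(text.substring(idx, eol + 1));  // keep the '\n'
            idx = eol + 1;
        }
        return lines;
    }

    public static void main(String[] args) {
        System.out.println(splitLines("a\nb\n").size());  // 3
    }
}
```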


@@ -0,0 +1,73 @@
/*
[The "BSD licence"]
Copyright (c) 2005 Martin Traverso
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.codegen;
public class RubyTarget
extends Target
{
public String getTargetCharLiteralFromANTLRCharLiteral(
CodeGenerator generator,
String literal)
{
literal = literal.substring(1, literal.length() - 1);
String result = "?";
if (literal.equals("\\")) {
result += "\\\\";
}
else if (literal.equals(" ")) {
result += "\\s";
}
else if (literal.startsWith("\\u")) {
result = "0x" + literal.substring(2);
}
else {
result += literal;
}
return result;
}
public int getMaxCharValue(CodeGenerator generator)
{
// we don't support unicode, yet.
return 0xFF;
}
public String getTokenTypeAsTargetLabel(CodeGenerator generator, int ttype)
{
String name = generator.grammar.getTokenDisplayName(ttype);
// If name is a literal, return the token type instead
if ( name.charAt(0)=='\'' ) {
return generator.grammar.computeTokenNameFromLiteral(ttype, name);
}
return name;
}
}


@@ -0,0 +1,303 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.codegen;
import org.antlr.Tool;
import org.antlr.analysis.Label;
import org.antlr.misc.Utils;
import org.antlr.stringtemplate.StringTemplate;
import org.antlr.tool.Grammar;
import java.io.IOException;
import java.util.List;
/** The code generator for ANTLR can usually be retargeted just by providing
* a new X.stg file for language X, however, sometimes the files that must
* be generated vary enough that some X-specific functionality is required.
* For example, in C, you must generate header files whereas in Java you do not.
* Other languages may want to keep DFA separate from the main
* generated recognizer file.
*
* The notion of a Code Generator target abstracts out the creation
* of the various files. As new language targets get added to the ANTLR
* system, this target class may have to be altered to handle more
* functionality. Eventually, just about all language generation issues
* will be expressible in terms of these methods.
*
If an org.antlr.codegen.XTarget class exists, it is used; otherwise the
Target base class is used. I am using a superclass rather than an
* interface for this target concept because I can add functionality
* later without breaking previously written targets (extra interface
* methods would force adding dummy functions to all code generator
* target classes).
*
*/
public class Target {
/** For pure strings of Java 16-bit unicode char, how can we display
* it in the target language as a literal. Useful for dumping
* predicates and such that may refer to chars that need to be escaped
* when represented as strings. Also, templates need to be escaped so
* that the target language can hold them as a string.
*
* I have defined (via the constructor) the set of typical escapes,
* but your Target subclass is free to alter the translated chars or
* add more definitions. This is nonstatic so each target can have
* a different set in memory at same time.
*/
protected String[] targetCharValueEscape = new String[255];
public Target() {
targetCharValueEscape['\n'] = "\\n";
targetCharValueEscape['\r'] = "\\r";
targetCharValueEscape['\t'] = "\\t";
targetCharValueEscape['\b'] = "\\b";
targetCharValueEscape['\f'] = "\\f";
targetCharValueEscape['\\'] = "\\\\";
targetCharValueEscape['\''] = "\\'";
targetCharValueEscape['"'] = "\\\"";
}
protected void genRecognizerFile(Tool tool,
CodeGenerator generator,
Grammar grammar,
StringTemplate outputFileST)
throws IOException
{
String fileName =
generator.getRecognizerFileName(grammar.name, grammar.type);
generator.write(outputFileST, fileName);
}
protected void genRecognizerHeaderFile(Tool tool,
CodeGenerator generator,
Grammar grammar,
StringTemplate headerFileST,
String extName) // e.g., ".h"
throws IOException
{
// no header file by default
}
protected void performGrammarAnalysis(CodeGenerator generator,
Grammar grammar)
{
// Build NFAs from the grammar AST
grammar.buildNFA();
// Create the DFA predictors for each decision
grammar.createLookaheadDFAs();
}
/** Is scope in @scope::name {action} valid for this kind of grammar?
* Targets like C++ may want to allow new scopes like headerfile or
* some such. The action names themselves are not policed at the
* moment so targets can add template actions w/o having to recompile
* ANTLR.
*/
public boolean isValidActionScope(int grammarType, String scope) {
switch (grammarType) {
case Grammar.LEXER :
if ( scope.equals("lexer") ) {return true;}
break;
case Grammar.PARSER :
if ( scope.equals("parser") ) {return true;}
break;
case Grammar.COMBINED :
if ( scope.equals("parser") ) {return true;}
if ( scope.equals("lexer") ) {return true;}
break;
case Grammar.TREE_PARSER :
if ( scope.equals("treeparser") ) {return true;}
break;
}
return false;
}
/** Target must be able to override the labels used for token types */
public String getTokenTypeAsTargetLabel(CodeGenerator generator, int ttype) {
String name = generator.grammar.getTokenDisplayName(ttype);
// If name is a literal, return the token type instead
if ( name.charAt(0)=='\'' ) {
return String.valueOf(ttype);
}
return name;
}
/** Convert from an ANTLR char literal found in a grammar file to
* an equivalent char literal in the target language. For most
* languages, this means leaving 'x' as 'x'. Actually, we need
* to escape '\u000A' so that it doesn't get converted to \n by
* the compiler. Convert the literal to the char value and then
* to an appropriate target char literal.
*
* Expect single quotes around the incoming literal.
*/
public String getTargetCharLiteralFromANTLRCharLiteral(
CodeGenerator generator,
String literal)
{
StringBuffer buf = new StringBuffer();
buf.append('\'');
int c = Grammar.getCharValueFromGrammarCharLiteral(literal);
if ( c<Label.MIN_CHAR_VALUE ) {
return "'\u0000'";
}
if ( c<targetCharValueEscape.length &&
targetCharValueEscape[c]!=null )
{
buf.append(targetCharValueEscape[c]);
}
else if ( Character.UnicodeBlock.of((char)c)==
Character.UnicodeBlock.BASIC_LATIN &&
!Character.isISOControl((char)c) )
{
// normal char
buf.append((char)c);
}
else {
// must be something unprintable...use \\uXXXX
// turn on the bit above max "\\uFFFF" value so that we pad with zeros
// then only take last 4 digits
String hex = Integer.toHexString(c|0x10000).toUpperCase().substring(1,5);
buf.append("\\u");
buf.append(hex);
}
buf.append('\'');
return buf.toString();
}
/** Convert from an ANTLR string literal found in a grammar file to
* an equivalent string literal in the target language. For Java, this
* is the translation 'a\n"' -> "a\n\"". Expect single quotes
* around the incoming literal. Just flip the quotes and replace
* double quotes with \"
*/
public String getTargetStringLiteralFromANTLRStringLiteral(
CodeGenerator generator,
String literal)
{
literal = Utils.replace(literal,"\\\"","\""); // \" to " to normalize
literal = Utils.replace(literal,"\"","\\\""); // " to \" to escape all
StringBuffer buf = new StringBuffer(literal);
buf.setCharAt(0,'"');
buf.setCharAt(literal.length()-1,'"');
return buf.toString();
}
/** Given a random string of Java unicode chars, return a new string with
* optionally appropriate quote characters for target language and possibly
* with some escaped characters. For example, if the incoming string has
* actual newline characters, the output of this method would convert them
* to the two char sequence \n for Java, C, C++, ... The new string has
* double-quotes around it as well. Example String in memory:
*
* a"[newlinechar]b'c[carriagereturnchar]d[tab]e\f
*
* would be converted to the valid Java s:
*
* "a\"\nb'c\rd\te\\f"
*
* or
*
* a\"\nb'c\rd\te\\f
*
* depending on the quoted arg.
*/
public String getTargetStringLiteralFromString(String s, boolean quoted) {
if ( s==null ) {
return null;
}
StringBuffer buf = new StringBuffer();
if ( quoted ) {
buf.append('"');
}
for (int i=0; i<s.length(); i++) {
int c = s.charAt(i);
if ( c!='\'' && // don't escape single quotes in strings for java
c<targetCharValueEscape.length &&
targetCharValueEscape[c]!=null )
{
buf.append(targetCharValueEscape[c]);
}
else {
buf.append((char)c);
}
}
if ( quoted ) {
buf.append('"');
}
return buf.toString();
}
public String getTargetStringLiteralFromString(String s) {
return getTargetStringLiteralFromString(s, false);
}
/** Convert long to 0xNNNNNNNNNNNNNNNN by default for spitting out
* with bitsets. I.e., convert bytes to hex string.
*/
public String getTarget64BitStringFromValue(long word) {
int numHexDigits = 8*2;
StringBuffer buf = new StringBuffer(numHexDigits+2);
buf.append("0x");
String digits = Long.toHexString(word);
digits = digits.toUpperCase();
int padding = numHexDigits - digits.length();
// pad left with zeros
for (int i=1; i<=padding; i++) {
buf.append('0');
}
buf.append(digits);
return buf.toString();
}
public String encodeIntAsCharEscape(int v) {
if ( v<=127 ) {
return "\\"+Integer.toOctalString(v);
}
String hex = Integer.toHexString(v|0x10000).substring(1,5);
return "\\u"+hex;
}
/** Some targets only support ASCII or 8-bit chars/strings. For example,
* C++ will probably want to return 0xFF here.
*/
public int getMaxCharValue(CodeGenerator generator) {
return Label.MAX_CHAR_VALUE;
}
/** Give target a chance to do some postprocessing on actions.
Python for example will have to fix the indentation.
*/
public List postProcessAction(List chunks, antlr.Token actionToken) {
return chunks;
}
}
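Two of Target's small helpers above are easy to check in isolation. The demo class below is hypothetical (not part of ANTLR); it re-implements getTarget64BitStringFromValue and encodeIntAsCharEscape with the same zero-padding tricks so their outputs can be seen directly.

```java
// Hypothetical demo of two Target helpers: 16-digit zero-padded hex words
// for bitsets, and octal vs padded \\uXXXX character escapes.
public class TargetHelpersDemo {
    // mirrors Target.getTarget64BitStringFromValue
    static String toHexWord(long word) {
        String digits = Long.toHexString(word).toUpperCase();
        StringBuilder buf = new StringBuilder("0x");
        for (int i = digits.length(); i < 16; i++) {
            buf.append('0');                  // pad left to 16 hex digits
        }
        return buf.append(digits).toString();
    }

    // mirrors Target.encodeIntAsCharEscape
    static String encodeIntAsCharEscape(int v) {
        if (v <= 127) {
            return "\\" + Integer.toOctalString(v);
        }
        // the 0x10000 bit forces five hex digits; drop the leading one
        return "\\u" + Integer.toHexString(v | 0x10000).substring(1, 5);
    }

    public static void main(String[] args) {
        System.out.println(toHexWord(255L));            // 0x00000000000000FF
        System.out.println(encodeIntAsCharEscape(10));  // \12
    }
}
```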

File diff suppressed because it is too large


@@ -0,0 +1,375 @@
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
interface ANTLRCore;
/** The overall file structure of a recognizer; stores methods for rules
* and cyclic DFAs plus support code.
*/
outputFile(LEXER,PARSER,TREE_PARSER, actionScope, actions,
docComment, recognizer,
name, tokens, tokenNames, rules, cyclicDFAs,
bitsets, buildTemplate, buildAST, rewriteMode, profile,
backtracking, synpreds, memoize, numRules,
fileName, ANTLRVersion, generatedTimestamp, trace,
scopes, superClass, literals);
/** The header file; make sure to define headerFileExtension() below */
optional
headerFile(LEXER,PARSER,TREE_PARSER, actionScope, actions,
docComment, recognizer,
name, tokens, tokenNames, rules, cyclicDFAs,
bitsets, buildTemplate, buildAST, rewriteMode, profile,
backtracking, synpreds, memoize, numRules,
fileName, ANTLRVersion, generatedTimestamp, trace,
scopes, superClass, literals);
lexer(grammar, name, tokens, scopes, rules, numRules, labelType,
filterMode, superClass);
parser(grammar, name, scopes, tokens, tokenNames, rules, numRules,
bitsets, ASTLabelType, superClass,
labelType, members);
/** How to generate a tree parser; same as parser except the input
* stream is a different type.
*/
treeParser(grammar, name, scopes, tokens, tokenNames, globalAction, rules,
numRules, bitsets, labelType, ASTLabelType,
superClass, members);
/** A simpler version of a rule template that is specific to the imaginary
* rules created for syntactic predicates. As they never have return values
* nor parameters etc..., just give simplest possible method. Don't do
* any of the normal memoization stuff in here either; it's a waste.
* As predicates cannot be inlined into the invoking rule, they need to
* be in a rule by themselves.
*/
synpredRule(ruleName, ruleDescriptor, block, description, nakedBlock);
/** How to generate code for a rule. This includes any return type
* data aggregates required for multiple return values.
*/
rule(ruleName,ruleDescriptor,block,emptyRule,description,exceptions,finally,memoize);
/** How to generate a rule in the lexer; naked blocks are used for
* fragment rules.
*/
lexerRule(ruleName,nakedBlock,ruleDescriptor,block,memoize);
/** How to generate code for the implicitly-defined lexer grammar rule
* that chooses between lexer rules.
*/
tokensRule(ruleName,nakedBlock,args,block,ruleDescriptor);
filteringNextToken();
filteringActionGate();
// S U B R U L E S
/** A (...) subrule with multiple alternatives */
block(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
/** A rule block with multiple alternatives */
ruleBlock(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
ruleBlockSingleAlt(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,description);
/** A special case of a (...) subrule with a single alternative */
blockSingleAlt(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,description);
/** A (..)+ block with 1 or more alternatives */
positiveClosureBlock(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
positiveClosureBlockSingleAlt(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
/** A (..)* block with 0 or more alternatives */
closureBlock(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
closureBlockSingleAlt(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
/** Optional blocks (x)? are translated to (x|) before code generation
* so we can just use the normal block template
*/
optionalBlock(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
optionalBlockSingleAlt(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
/** An alternative is just a list of elements; at outermost level */
alt(elements,altNum,description,autoAST,outerAlt,treeLevel,rew);
// E L E M E N T S
/** match a token optionally with a label in front */
tokenRef(token,label,elementIndex,hetero);
/** ids+=ID */
tokenRefAndListLabel(token,label,elementIndex,hetero);
listLabel(label,elem);
/** match a character */
charRef(char,label);
/** match a character range */
charRangeRef(a,b,label);
/** For now, sets are interval tests and must be tested inline */
matchSet(s,label,elementIndex,postmatchCode);
matchSetAndListLabel(s,label,elementIndex,postmatchCode);
/** Match a string literal */
lexerStringRef(string,label);
wildcard(label,elementIndex);
wildcardAndListLabel(label,elementIndex);
/** Match . wildcard in lexer */
wildcardChar(label, elementIndex);
wildcardCharListLabel(label, elementIndex);
/** Match a rule reference by invoking it possibly with arguments
* and a return value or values.
*/
ruleRef(rule,label,elementIndex,args,scope);
/** ids+=ID */
ruleRefAndListLabel(rule,label,elementIndex,args,scope);
/** A lexer rule reference */
lexerRuleRef(rule,label,args,elementIndex,scope);
/** i+=INT in lexer */
lexerRuleRefAndListLabel(rule,label,args,elementIndex,scope);
/** EOF in the lexer */
lexerMatchEOF(label,elementIndex);
/** match ^(root children) in tree parser */
tree(root, actionsAfterRoot, children, nullableChildList,
enclosingTreeLevel, treeLevel);
/** Every predicate is used as a validating predicate (even when it is
* also hoisted into a prediction expression).
*/
validateSemanticPredicate(pred,description);
// F i x e d D F A (if-then-else)
dfaState(k,edges,eotPredictsAlt,description,stateNumber,semPredState);
/** Same as a normal DFA state except that we don't examine lookahead
* for the bypass alternative. It delays error detection but this
is faster, smaller, and more like what people expect. For (X)? people
* expect "if ( LA(1)==X ) match(X);" and that's it.
*
* If a semPredState, don't force lookahead lookup; preds might not
need it.
*/
dfaOptionalBlockState(k,edges,eotPredictsAlt,description,stateNumber,semPredState);
/** A DFA state that is actually the loopback decision of a closure
* loop. If end-of-token (EOT) predicts any of the targets then it
* should act like a default clause (i.e., no error can be generated).
* This is used only in the lexer so that for ('a')* on the end of a
* rule anything other than 'a' predicts exiting.
*
 * If a semPredState, don't force lookahead lookup; the preds might not
 * need it.
*/
dfaLoopbackState(k,edges,eotPredictsAlt,description,stateNumber,semPredState);
/** An accept state indicates a unique alternative has been predicted */
dfaAcceptState(alt);
/** A simple edge with an expression. If the expression is satisfied,
 * enter the target state. To handle gated productions, we may
* have to evaluate some predicates for this edge.
*/
dfaEdge(labelExpr, targetState, predicates);
// F i x e d D F A (switch case)
/** A DFA state where a SWITCH may be generated. The code generator
* decides if this is possible: CodeGenerator.canGenerateSwitch().
*/
dfaStateSwitch(k,edges,eotPredictsAlt,description,stateNumber,semPredState);
dfaOptionalBlockStateSwitch(k,edges,eotPredictsAlt,description,stateNumber,semPredState);
dfaLoopbackStateSwitch(k, edges,eotPredictsAlt,description,stateNumber,semPredState);
dfaEdgeSwitch(labels, targetState);
// C y c l i c D F A
/** The code to initiate execution of a cyclic DFA; this is used
* in the rule to predict an alt just like the fixed DFA case.
* The <name> attribute is inherited via the parser, lexer, ...
*/
dfaDecision(decisionNumber,description);
/** Generate the tables and support code needed for the DFAState object
 * argument. Unless there is a semantic predicate (or a syn pred, which
 * becomes a sem pred), all states should be encoded in the state tables.
 * Consequently, the cyclicDFAState, cyclicDFAEdge, and eotDFAEdge templates
 * are not used except for special DFA states that cannot be encoded as
* a transition table.
*/
cyclicDFA(dfa);
/** A special state in a cyclic DFA; special means it has a semantic
 * predicate or a huge set of symbols to check.
*/
cyclicDFAState(decisionNumber,stateNumber,edges,needErrorClause,semPredState);
/** Just like a fixed DFA edge, test the lookahead and indicate what
* state to jump to next if successful. Again, this is for special
* states.
*/
cyclicDFAEdge(labelExpr, targetStateNumber, edgeNumber, predicates);
/** An edge pointing at end-of-token; essentially matches any char;
* always jump to the target.
*/
eotDFAEdge(targetStateNumber,edgeNumber, predicates);
// D F A E X P R E S S I O N S
andPredicates(left,right);
orPredicates(operands);
notPredicate(pred);
evalPredicate(pred,description);
evalSynPredicate(pred,description);
lookaheadTest(atom,k,atomAsInt);
/** Sometimes a lookahead test cannot assume that LA(k) is in a temp variable
 * somewhere; we must ask for the lookahead directly.
*/
isolatedLookaheadTest(atom,k,atomAsInt);
lookaheadRangeTest(lower,upper,k,rangeNumber,lowerAsInt,upperAsInt);
isolatedLookaheadRangeTest(lower,upper,k,rangeNumber,lowerAsInt,upperAsInt);
setTest(ranges);
// A T T R I B U T E S
parameterAttributeRef(attr);
parameterSetAttributeRef(attr,expr);
scopeAttributeRef(scope,attr,index,negIndex);
scopeSetAttributeRef(scope,attr,expr,index,negIndex);
/** $x where x is either a global scope or a rule with dynamic scope; refers
 * to the stack itself, not the top of the stack. This is useful for predicates
* like {$function.size()>0 && $function::name.equals("foo")}?
*/
isolatedDynamicScopeRef(scope);
/** Reference an attribute of a rule; the rule might have only a single return value */
ruleLabelRef(referencedRule,scope,attr);
returnAttributeRef(ruleDescriptor,attr);
returnSetAttributeRef(ruleDescriptor,attr,expr);
/** How to translate $tokenLabel */
tokenLabelRef(label);
/** ids+=ID {$ids} or e+=expr {$e} */
listLabelRef(label);
// Not sure the next ones are the right approach; they are evaluated early,
// so they cannot see TREE_PARSER or PARSER attributes, for example. :(
tokenLabelPropertyRef_text(scope,attr);
tokenLabelPropertyRef_type(scope,attr);
tokenLabelPropertyRef_line(scope,attr);
tokenLabelPropertyRef_pos(scope,attr);
tokenLabelPropertyRef_channel(scope,attr);
tokenLabelPropertyRef_index(scope,attr);
tokenLabelPropertyRef_tree(scope,attr);
ruleLabelPropertyRef_start(scope,attr);
ruleLabelPropertyRef_stop(scope,attr);
ruleLabelPropertyRef_tree(scope,attr);
ruleLabelPropertyRef_text(scope,attr);
ruleLabelPropertyRef_st(scope,attr);
/** An isolated $RULE ref is ok in the lexer, as it's a Token */
lexerRuleLabel(label);
lexerRuleLabelPropertyRef_type(scope,attr);
lexerRuleLabelPropertyRef_line(scope,attr);
lexerRuleLabelPropertyRef_pos(scope,attr);
lexerRuleLabelPropertyRef_channel(scope,attr);
lexerRuleLabelPropertyRef_index(scope,attr);
lexerRuleLabelPropertyRef_text(scope,attr);
// Somebody may ref $template or $tree or $stop within a rule:
rulePropertyRef_start(scope,attr);
rulePropertyRef_stop(scope,attr);
rulePropertyRef_tree(scope,attr);
rulePropertyRef_text(scope,attr);
rulePropertyRef_st(scope,attr);
lexerRulePropertyRef_text(scope,attr);
lexerRulePropertyRef_type(scope,attr);
lexerRulePropertyRef_line(scope,attr);
lexerRulePropertyRef_pos(scope,attr);
/** Undefined, but present for consistency with Token attributes; set to -1 */
lexerRulePropertyRef_index(scope,attr);
lexerRulePropertyRef_channel(scope,attr);
lexerRulePropertyRef_start(scope,attr);
lexerRulePropertyRef_stop(scope,attr);
ruleSetPropertyRef_tree(scope,attr,expr);
ruleSetPropertyRef_st(scope,attr,expr);
/** How to execute an action */
execAction(action);
// M I S C (properties, etc...)
codeFileExtension();
/** The header file extension, if your language needs a header file; e.g., ".h" */
optional headerFileExtension();
true();
false();
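The fixed-DFA templates listed above emit inline prediction code. As a rough, hedged sketch of what an if-then-else dfaState/dfaEdge expansion looks like in the generated ActionScript (decision number 3 and the token types are assumed, not from this file):

```actionscript
// Hypothetical k=1 decision emitted by dfaState + dfaEdge:
var LA3_0:int = input.LA(1);               // lookahead cached in a temp
if ( LA3_0 == ID )  { alt3 = 1; }          // dfaEdge: labelExpr -> accept alt 1
else if ( LA3_0 == INT ) { alt3 = 2; }     // dfaEdge: labelExpr -> accept alt 2
else {
    // no edge matched: report that no alternative is viable
    throw new NoViableAltException("", 3, 0, input);
}
```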

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
group AST;
@outputFile.imports() ::= <<
<@super.imports()>
<if(!TREE_PARSER)><! tree parser would already have imported !>
import org.antlr.runtime.tree.*;<\n>
<endif>
>>
@genericParser.members() ::= <<
<@super.members()>
<parserMembers()>
>>
/** Add an adaptor property that knows how to build trees */
parserMembers() ::= <<
protected var adaptor:TreeAdaptor = new CommonTreeAdaptor();<\n>
public function set treeAdaptor(adaptor:TreeAdaptor):void {
this.adaptor = adaptor;
}
public function get treeAdaptor():TreeAdaptor {
return adaptor;
}
>>
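The adaptor property generated by parserMembers above lets client code swap in a custom tree builder before parsing. A hedged usage sketch (TLexer, TParser, and the prog rule are assumed generated names, not from this file):

```actionscript
// Hypothetical: install an adaptor so every AST node the parser
// builds goes through its create()/becomeRoot() hooks.
var lexer:TLexer = new TLexer(new ANTLRStringStream("a+b"));
var parser:TParser = new TParser(new CommonTokenStream(lexer));
parser.treeAdaptor = new CommonTreeAdaptor(); // or any TreeAdaptor subclass
var ret:Object = parser.prog();               // ret.tree holds the AST root
```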
@returnScope.ruleReturnMembers() ::= <<
<ASTLabelType> tree;
public function get tree():Object { return tree; }
>>
/** Add a variable to track rule's return AST */
ruleDeclarations() ::= <<
<super.ruleDeclarations()>
var root_0:<ASTLabelType> = null;<\n>
>>
ruleLabelDefs() ::= <<
<super.ruleLabelDefs()>
<ruleDescriptor.tokenLabels:{var <it.label.text>_tree:<ASTLabelType>=null;}; separator="\n">
<ruleDescriptor.tokenListLabels:{var <it.label.text>_tree:<ASTLabelType>=null;}; separator="\n">
<ruleDescriptor.allTokenRefsInAltsWithRewrites
:{var stream_<it>:RewriteRule<rewriteElementType>Stream=new RewriteRule<rewriteElementType>Stream(adaptor,"token <it>");}; separator="\n">
<ruleDescriptor.allRuleRefsInAltsWithRewrites
:{var stream_<it>:RewriteRuleSubtreeStream=new RewriteRuleSubtreeStream(adaptor,"rule <it>");}; separator="\n">
>>
/** When doing auto AST construction, we must define some variables;
 * these should be turned off if doing rewrites. This must be a "mode",
 * as a rule could have both a rewrite and auto AST within the same
 * alternative block.
*/
@alt.declarations() ::= <<
<if(autoAST)>
<if(outerAlt)>
<if(!rewriteMode)>
root_0 = <ASTLabelType>(adaptor.nil());<\n>
<endif>
<endif>
<endif>
>>
// T r a c k i n g R u l e E l e m e n t s
/** ID and track it for use in a rewrite rule */
tokenRefTrack(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)> <! Track implies no auto AST construction!>
<if(backtracking)>if ( state.backtracking==0 ) <endif>stream_<token>.add(<label>);<\n>
>>
/** ids+=ID and track it for use in a rewrite rule; adds to ids *and*
* to the tracking list stream_ID for use in the rewrite.
*/
tokenRefTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefTrack(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) track for rewrite */
tokenRefRuleRootTrack(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>stream_<token>.add(<label>);<\n>
>>
/** Match ^(label+=TOKEN ...) track for rewrite */
tokenRefRuleRootTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefRuleRootTrack(...)>
<listLabel(elem=label,...)>
>>
wildcardTrack(label,elementIndex) ::= <<
<super.wildcard(...)>
>>
/** rule when output=AST and tracking for rewrite */
ruleRefTrack(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( this.state.backtracking==0 ) <endif>stream_<rule.name>.add(<label>.tree);
>>
/** x+=rule when output=AST and tracking for rewrite */
ruleRefTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefTrack(...)>
<listLabel(elem=label+".tree",...)>
>>
/** ^(rule ...) rewrite */
ruleRefRuleRootTrack(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>stream_<rule>.add(<label>.tree);
>>
/** ^(x+=rule ...) rewrite */
ruleRefRuleRootTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRootTrack(...)>
<listLabel(elem=label+".tree",...)>
>>
// R e w r i t e
rewriteCode(
alts, description,
referencedElementsDeep, // ALL referenced elements to right of ->
referencedTokenLabels,
referencedTokenListLabels,
referencedRuleLabels,
referencedRuleListLabels,
rewriteBlockLevel, enclosingTreeLevel, treeLevel) ::=
<<
// AST REWRITE
// elements: <referencedElementsDeep; separator=", ">
// token labels: <referencedTokenLabels; separator=", ">
// rule labels: <referencedRuleLabels; separator=", ">
// token list labels: <referencedTokenListLabels; separator=", ">
// rule list labels: <referencedRuleListLabels; separator=", ">
<if(backtracking)>
if ( this.state.backtracking==0 ) {<\n>
<endif>
<prevRuleRootRef()>.tree = root_0;
<rewriteCodeLabels()>
root_0 = <ASTLabelType>(adaptor.nil());
<alts:rewriteAlt(); separator="else ">
<! if tree parser and rewrite=true !>
<if(TREE_PARSER)>
<if(rewriteMode)>
<prevRuleRootRef()>.tree = <ASTLabelType>(adaptor.rulePostProcessing(root_0));
input.replaceChildren(adaptor.getParent(retval.start),
adaptor.getChildIndex(retval.start),
adaptor.getChildIndex(_last),
retval.tree);
<endif>
<endif>
<! if parser or rewrite!=true, we need to set result !>
<if(!TREE_PARSER)>
<prevRuleRootRef()>.tree = root_0;
<endif>
<if(!rewriteMode)>
<prevRuleRootRef()>.tree = root_0;
<endif>
<if(backtracking)>
}
<endif>
>>
rewriteCodeLabels() ::= <<
<referencedTokenLabels
:{var stream_<it>:RewriteRule<rewriteElementType>Stream=new RewriteRule<rewriteElementType>Stream(adaptor,"token <it>",<it>);};
separator="\n"
>
<referencedTokenListLabels
:{var stream_<it>:RewriteRule<rewriteElementType>Stream=new RewriteRule<rewriteElementType>Stream(adaptor,"token <it>", list_<it>);};
separator="\n"
>
<referencedRuleLabels
:{var stream_<it>:RewriteRuleSubtreeStream=new RewriteRuleSubtreeStream(adaptor,"token <it>",<it>!=null?<it>.tree:null);};
separator="\n"
>
<referencedRuleListLabels
:{var stream_<it>:RewriteRuleSubtreeStream=new RewriteRuleSubtreeStream(adaptor,"token <it>",list_<it>);};
separator="\n"
>
>>
/** Generate code for an optional rewrite block; note that it uses the deep
 * referenced-element list rather than the shallow one like other blocks.
*/
rewriteOptionalBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
	referencedElements, // elements in the immediate block; no nested blocks
description) ::=
<<
// <fileName>:<description>
if ( <referencedElementsDeep:{el | stream_<el>.hasNext}; separator="||"> ) {
<alt>
}
<referencedElementsDeep:{el | stream_<el>.reset();<\n>}>
>>
rewriteClosureBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
	referencedElements, // elements in the immediate block; no nested blocks
description) ::=
<<
// <fileName>:<description>
while ( <referencedElements:{el | stream_<el>.hasNext}; separator="||"> ) {
<alt>
}
<referencedElements:{el | stream_<el>.reset();<\n>}>
>>
rewritePositiveClosureBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
	referencedElements, // elements in the immediate block; no nested blocks
description) ::=
<<
if ( !(<referencedElements:{el | stream_<el>.hasNext}; separator="||">) ) {
throw new RewriteEarlyExitException();
}
while ( <referencedElements:{el | stream_<el>.hasNext}; separator="||"> ) {
<alt>
}
<referencedElements:{el | stream_<el>.reset();<\n>}>
>>
rewriteAlt(a) ::= <<
// <a.description>
<if(a.pred)>
if (<a.pred>) {
<a.alt>
}<\n>
<else>
{
<a.alt>
}<\n>
<endif>
>>
/** For empty rewrites: "r : ... -> ;" */
rewriteEmptyAlt() ::= "root_0 = null;"
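Taken together, rewriteCode, rewriteTree, and the rewrite*Ref templates above expand a rewrite alt into nested root/child construction. A hedged sketch of the generated ActionScript for a hypothetical rule `decl : 'def' ID -> ^(FUNC ID) ;` (FUNC and stream_ID are assumed names):

```actionscript
// Hypothetical expansion of: -> ^(FUNC ID)
root_0 = CommonTree(adaptor.nil());                  // fresh rewrite root (rewriteCode)
var root_1:CommonTree = CommonTree(adaptor.nil());   // one tree level (rewriteTree)
root_1 = CommonTree(adaptor.becomeRoot(
    adaptor.create(FUNC, "FUNC"), root_1));          // rewriteImaginaryTokenRefRoot
adaptor.addChild(root_1, stream_ID.nextNode());      // rewriteTokenRef pulls from the stream
adaptor.addChild(root_0, root_1);                    // attach subtree to enclosing level
retval.tree = root_0;                                // rewriteCode sets the rule result
```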
rewriteTree(root,children,description,enclosingTreeLevel,treeLevel) ::= <<
// <fileName>:<description>
{
var root_<treeLevel>:<ASTLabelType> = <ASTLabelType>(adaptor.nil());
<root:rewriteElement()>
<children:rewriteElement()>
adaptor.addChild(root_<enclosingTreeLevel>, root_<treeLevel>);
}<\n>
>>
rewriteElementList(elements) ::= "<elements:rewriteElement()>"
rewriteElement(e) ::= <<
<@pregen()>
<e.el>
>>
/** Gen ID or ID[args] */
rewriteTokenRef(token,elementIndex,hetero,args) ::= <<
adaptor.addChild(root_<treeLevel>, <createRewriteNodeFromElement(...)>);<\n>
>>
/** Gen $label ... where defined via label=ID */
rewriteTokenLabelRef(label,elementIndex) ::= <<
adaptor.addChild(root_<treeLevel>, stream_<label>.nextNode());<\n>
>>
/** Gen $label ... where defined via label+=ID */
rewriteTokenListLabelRef(label,elementIndex) ::= <<
adaptor.addChild(root_<treeLevel>, stream_<label>.nextNode());<\n>
>>
/** Gen ^($label ...) */
rewriteTokenLabelRefRoot(label,elementIndex) ::= <<
root_<treeLevel> = <ASTLabelType>(adaptor.becomeRoot(stream_<label>.nextNode(), root_<treeLevel>));<\n>
>>
/** Gen ^($label ...) where label+=... */
rewriteTokenListLabelRefRoot ::= rewriteTokenLabelRefRoot
/** Gen ^(ID ...) or ^(ID[args] ...) */
rewriteTokenRefRoot(token,elementIndex,hetero,args) ::= <<
root_<treeLevel> = <ASTLabelType>(adaptor.becomeRoot(<createRewriteNodeFromElement(...)>, root_<treeLevel>));<\n>
>>
rewriteImaginaryTokenRef(args,token,hetero,elementIndex) ::= <<
adaptor.addChild(root_<treeLevel>, <createImaginaryNode(tokenType=token, ...)>);<\n>
>>
rewriteImaginaryTokenRefRoot(args,token,hetero,elementIndex) ::= <<
root_<treeLevel> = <ASTLabelType>(adaptor.becomeRoot(<createImaginaryNode(tokenType=token, ...)>, root_<treeLevel>));<\n>
>>
/** plain -> {foo} action */
rewriteAction(action) ::= <<
root_0 = <action>;<\n>
>>
/** What is the name of the previous value of this rule's root tree? This
 * lets us refer to $rule to mean the previous value. I am reusing the
 * variable 'tree' sitting in the retval struct to hold the value of root_0
 * right before I set it during rewrites. The assignment will be to
 * retval.tree.
*/
prevRuleRootRef() ::= "retval"
rewriteRuleRef(rule) ::= <<
adaptor.addChild(root_<treeLevel>, stream_<rule>.nextTree());<\n>
>>
rewriteRuleRefRoot(rule) ::= <<
root_<treeLevel> = <ASTLabelType>(adaptor.becomeRoot(stream_<rule>.nextNode(), root_<treeLevel>));<\n>
>>
rewriteNodeAction(action) ::= <<
adaptor.addChild(root_<treeLevel>, <action>);<\n>
>>
rewriteNodeActionRoot(action) ::= <<
root_<treeLevel> = <ASTLabelType>(adaptor.becomeRoot(<action>, root_<treeLevel>));<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel=rule */
rewriteRuleLabelRef(label) ::= <<
adaptor.addChild(root_<treeLevel>, stream_<label>.nextTree());<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel+=rule */
rewriteRuleListLabelRef(label) ::= <<
adaptor.addChild(root_<treeLevel>, stream_<label>.nextTree());<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel=rule */
rewriteRuleLabelRefRoot(label) ::= <<
root_<treeLevel> = <ASTLabelType>(adaptor.becomeRoot(stream_<label>.nextNode(), root_<treeLevel>));<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel+=rule */
rewriteRuleListLabelRefRoot(label) ::= <<
root_<treeLevel> = <ASTLabelType>(adaptor.becomeRoot(stream_<label>.nextNode(), root_<treeLevel>));<\n>
>>
createImaginaryNode(tokenType,hetero,args) ::= <<
<if(hetero)>
<! new MethodNode(IDLabel, args) !>
new <hetero>(<tokenType><if(args)>, <args; separator=", "><endif>)
<else>
<ASTLabelType>(adaptor.create(<tokenType>, <args; separator=", "><if(!args)>"<tokenType>"<endif>))
<endif>
>>
createRewriteNodeFromElement(token,hetero,args) ::= <<
<if(hetero)>
new <hetero>(stream_<token>.nextToken()<if(args)>, <args; separator=", "><endif>)
<else>
<if(args)> <! must create new node from old !>
adaptor.create(<token>, <args; separator=", ">)
<else>
stream_<token>.nextNode()
<endif>
<endif>
>>

/** Templates for building ASTs during normal parsing.
*
* Deal with many combinations. Dimensions are:
* Auto build or rewrite
* no label, label, list label (label/no-label handled together)
* child, root
* token, set, rule, wildcard
*
* The situation is not too bad as rewrite (->) usage makes ^ and !
* invalid. There is no huge explosion of combinations.
*/
group ASTParser;
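For orientation, a hedged sketch of how one of these templates expands: for a hypothetical rule `expr : atom ('+'^ atom)* ;` with output=AST, the tokenRefRuleRoot template below generates ActionScript along these lines (PLUS7 and FOLLOW_PLUS_in_expr are assumed generated identifiers):

```actionscript
// Hypothetical expansion of '+'^ (tokenRefRuleRoot):
var PLUS7:Token = Token(match(input, PLUS, FOLLOW_PLUS_in_expr)); // <super.tokenRef(...)>
var PLUS7_tree:CommonTree = CommonTree(adaptor.create(PLUS7));    // createNodeFromToken
root_0 = CommonTree(adaptor.becomeRoot(PLUS7_tree, root_0));      // '+' becomes the subtree root
```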
@rule.setErrorReturnValue() ::= <<
retval.tree = <ASTLabelType>(adaptor.errorNode(input, Token(retval.start), input.LT(-1), re));
<! trace("<ruleName> returns "+((CommonTree)retval.tree).toStringTree()); !>
>>
// TOKEN AST STUFF
/** ID and output=AST */
tokenRef(token,label,elementIndex,hetero) ::= <<
<super.tokenRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<label>_tree = <createNodeFromToken(...)>;
adaptor.addChild(root_0, <label>_tree);
<if(backtracking)>}<endif>
>>
/** ID! and output=AST (same as plain tokenRef) */
tokenRefBang(token,label,elementIndex) ::= "<super.tokenRef(...)>"
/** ID^ and output=AST */
tokenRefRuleRoot(token,label,elementIndex,hetero) ::= <<
<super.tokenRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<label>_tree = <createNodeFromToken(...)>;
root_0 = <ASTLabelType>(adaptor.becomeRoot(<label>_tree, root_0));
<if(backtracking)>}<endif>
>>
/** ids+=ID! and output=AST */
tokenRefBangAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)>
<listLabel(elem=label,...)>
>>
/** label+=TOKEN when output=AST but not rewrite alt */
tokenRefAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** Match label+=TOKEN^ when output=AST but not rewrite alt */
tokenRefRuleRootAndListLabel(token,label,hetero,elementIndex) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// SET AST
// the match set stuff is interesting in that it uses an argument list
// to pass code to the default matchSet; another possible way to alter
// inherited code. I don't use the region stuff because I need to pass
// different chunks depending on the operator. I don't like making the
// template name include the operator, as the number of templates gets
// large, but this is the most flexible approach--as opposed to having
// the code generator call matchSet and then add root code or rule-root code
// plus list label plus ... The combinations might require complicated
// rather than just added-on code. Investigate that refactoring when
// I have more time.
matchSet(s,label,hetero,elementIndex,postmatchCode) ::= <<
<super.matchSet(..., postmatchCode={<if(backtracking)>if ( state.backtracking==0 ) <endif>adaptor.addChild(root_0, <createNodeFromToken(...)>);})>
>>
matchRuleBlockSet(s,label,hetero,elementIndex,postmatchCode,treeLevel="0") ::= <<
<matchSet(...)>
>>
matchSetBang(s,label,elementIndex,postmatchCode) ::= "<super.matchSet(...)>"
// note there is no matchSetTrack because -> rewrites force sets to be
// plain old blocks of alts: (A|B|...|C)
matchSetRuleRoot(s,label,hetero,elementIndex,debug) ::= <<
<if(label)>
<label>=<labelType>(input.LT(1));<\n>
<endif>
<super.matchSet(..., postmatchCode={<if(backtracking)>if ( state.backtracking==0 ) <endif>root_0 = <ASTLabelType>(adaptor.becomeRoot(<createNodeFromToken(...)>, root_0));})>
>>
// RULE REF AST
/** rule when output=AST */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>adaptor.addChild(root_0, <label>.tree);
>>
/** rule! is same as normal rule ref */
ruleRefBang(rule,label,elementIndex,args,scope) ::= "<super.ruleRef(...)>"
/** rule^ */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>root_0 = <ASTLabelType>(adaptor.becomeRoot(<label>.tree, root_0));
>>
/** x+=rule when output=AST */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".tree",...)>
>>
/** x+=rule! when output=AST is a rule ref with list addition */
ruleRefBangAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefBang(...)>
<listLabel(elem=label+".tree",...)>
>>
/** x+=rule^ */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<listLabel(elem=label+".tree",...)>
>>
// WILDCARD AST
wildcard(label,elementIndex) ::= <<
<super.wildcard(...)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<label>_tree = <ASTLabelType>(adaptor.create(<label>));
adaptor.addChild(root_0, <label>_tree);
<if(backtracking)>}<endif>
>>
wildcardBang(label,elementIndex) ::= "<super.wildcard(...)>"
wildcardRuleRoot(label,elementIndex) ::= <<
<super.wildcard(...)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<label>_tree = <ASTLabelType>(adaptor.create(<label>));
root_0 = <ASTLabelType>(adaptor.becomeRoot(<label>_tree, root_0));
<if(backtracking)>}<endif>
>>
createNodeFromToken(label,hetero) ::= <<
<if(hetero)>
new <hetero>(<label>) <! new MethodNode(IDLabel) !>
<else>
<ASTLabelType>(adaptor.create(<label>))
<endif>
>>
ruleCleanUp() ::= <<
<super.ruleCleanUp()>
<if(backtracking)>if ( state.backtracking==0 ) {<\n><endif>
retval.tree = <ASTLabelType>(adaptor.rulePostProcessing(root_0));
adaptor.setTokenBoundaries(retval.tree, Token(retval.start), Token(retval.stop));
<if(backtracking)>}<endif>
>>

/** Templates for building ASTs during tree parsing.
*
* Deal with many combinations. Dimensions are:
* Auto build or rewrite
* no label, label, list label (label/no-label handled together)
* child, root
* token, set, rule, wildcard
*
* Each combination has its own template except that label/no label
* is combined into tokenRef, ruleRef, ...
*/
group ASTTreeParser;
/** Add a variable to track last element matched */
ruleDeclarations() ::= <<
<super.ruleDeclarations()>
var _first_0:<ASTLabelType> = null;
var _last:<ASTLabelType> = null;<\n>
>>
/** What to emit when there is no rewrite rule. In auto-build
 * mode, this does nothing.
*/
noRewrite(rewriteBlockLevel, treeLevel) ::= <<
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(rewriteMode)>
retval.tree = <ASTLabelType>(_first_0);
if ( adaptor.getParent(retval.tree)!=null && adaptor.isNil( adaptor.getParent(retval.tree) ) )
retval.tree = <ASTLabelType>(adaptor.getParent(retval.tree));
<endif>
<if(backtracking)>}<endif>
>>
/** match ^(root children) in tree parser; override here to
* add tree construction actions.
*/
tree(root, actionsAfterRoot, children, nullableChildList,
enclosingTreeLevel, treeLevel) ::= <<
_last = <ASTLabelType>(input.LT(1));
{
var _save_last_<treeLevel>:<ASTLabelType> = _last;
var _first_<treeLevel>:<ASTLabelType> = null;
<if(!rewriteMode)>
var root_<treeLevel>:<ASTLabelType> = <ASTLabelType>(adaptor.nil());
<endif>
<root:element()>
<if(rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 )<endif>
<if(root.el.rule)>
if ( _first_<enclosingTreeLevel>==null ) _first_<enclosingTreeLevel> = <root.el.label>.tree;
<else>
if ( _first_<enclosingTreeLevel>==null ) _first_<enclosingTreeLevel> = <root.el.label>;
<endif>
<endif>
<actionsAfterRoot:element()>
<if(nullableChildList)>
if ( input.LA(1)==TokenConstants.DOWN ) {
match(input, TokenConstants.DOWN, null); <checkRuleBacktrackFailure()>
<children:element()>
match(input, TokenConstants.UP, null); <checkRuleBacktrackFailure()>
}
<else>
match(input, TokenConstants.DOWN, null); <checkRuleBacktrackFailure()>
<children:element()>
match(input, TokenConstants.UP, null); <checkRuleBacktrackFailure()>
<endif>
<if(!rewriteMode)>
adaptor.addChild(root_<enclosingTreeLevel>, root_<treeLevel>);
<endif>
_last = _save_last_<treeLevel>;
}<\n>
>>
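The tree() template above drives the DOWN/UP navigation of a subtree. A hedged sketch of its expansion for matching `^(PLUS expr expr)` in a tree parser (tree level 1 assumed):

```actionscript
// Hypothetical expansion of tree(root, children, ...) at treeLevel 1:
_last = CommonTree(input.LT(1));                 // remember last node matched
var _save_last_1:CommonTree = _last;
var _first_1:CommonTree = null;
match(input, PLUS, null);                        // <root:element()>
match(input, TokenConstants.DOWN, null);         // descend into the children
// ... <children:element()> expansions matched here ...
match(input, TokenConstants.UP, null);           // pop back to the parent level
_last = _save_last_1;                            // restore for the enclosing level
```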
// TOKEN AST STUFF
/** ID! and output=AST (same as plain tokenRef) except that it also
 * sets _last
*/
tokenRefBang(token,label,elementIndex) ::= <<
_last = <ASTLabelType>(input.LT(1));
<super.tokenRef(...)>
>>
/** ID auto construct */
tokenRef(token,label,elementIndex,hetero) ::= <<
_last = <ASTLabelType>(input.LT(1));
<super.tokenRef(...)>
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = <ASTLabelType>(adaptor.dupNode(<label>));
<endif><\n>
adaptor.addChild(root_<treeLevel>, <label>_tree);
<if(backtracking)>}<endif>
<else> <! rewrite mode !>
<if(backtracking)>if ( state.backtracking==0 )<endif>
if ( _first_<treeLevel>==null ) _first_<treeLevel> = <label>;
<endif>
>>
/** label+=TOKEN auto construct */
tokenRefAndListLabel(token,label,elementIndex) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) auto construct */
tokenRefRuleRoot(token,label,elementIndex,hetero) ::= <<
_last = <ASTLabelType>(input.LT(1));
<super.tokenRef(...)>
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = <ASTLabelType>(adaptor.dupNode(<label>));
<endif><\n>
root_<treeLevel> = <ASTLabelType>(adaptor.becomeRoot(<label>_tree, root_<treeLevel>));
<if(backtracking)>}<endif>
<endif>
>>
/** Match ^(label+=TOKEN ...) auto construct */
tokenRefRuleRootAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// SET AST
matchSet(s,label,hetero,elementIndex,postmatchCode) ::= <<
_last = <ASTLabelType>(input.LT(1));
<super.matchSet(..., postmatchCode={
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = <ASTLabelType>(adaptor.dupNode(<label>));
<endif><\n>
adaptor.addChild(root_<treeLevel>, <label>_tree);
<if(backtracking)>}<endif>
<endif>
}
)>
>>
matchRuleBlockSet(s,label,hetero,elementIndex,postmatchCode,treeLevel="0") ::= <<
<matchSet(...)>
<noRewrite()> <! set return tree !>
>>
matchSetBang(s,label,elementIndex,postmatchCode) ::= <<
_last = <ASTLabelType>(input.LT(1));
<super.matchSet(...)>
>>
matchSetRuleRoot(s,label,hetero,elementIndex,debug) ::= <<
<super.matchSet(..., postmatchCode={
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = <ASTLabelType>(adaptor.dupNode(<label>));
<endif><\n>
root_<treeLevel> = <ASTLabelType>(adaptor.becomeRoot(<label>_tree, root_<treeLevel>));
<if(backtracking)>}<endif>
<endif>
}
)>
>>
// RULE REF AST
/** rule auto construct */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
_last = <ASTLabelType>(input.LT(1));
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>
<if(!rewriteMode)>
adaptor.addChild(root_<treeLevel>, <label>.getTree());
<else> <! rewrite mode !>
if ( _first_<treeLevel>==null ) _first_<treeLevel> = <label>.tree;
<endif>
>>
/** x+=rule auto construct */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".tree",...)>
>>
/** ^(rule ...) auto construct */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
_last = <ASTLabelType>(input.LT(1));
<super.ruleRef(...)>
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>root_<treeLevel> = <ASTLabelType>(adaptor.becomeRoot(<label>.tree, root_<treeLevel>));
<endif>
>>
/** ^(x+=rule ...) auto construct */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<listLabel(elem=label+".tree",...)>
>>
/** Streams for token refs are tree nodes now; override to
* change nextToken to nextNode.
*/
createRewriteNodeFromElement(token,hetero,scope) ::= <<
<if(hetero)>
new <hetero>(stream_<token>.nextNode())
<else>
stream_<token>.nextNode()
<endif>
>>
ruleCleanUp() ::= <<
<super.ruleCleanUp()>
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) {<\n><endif>
retval.tree = <ASTLabelType>(adaptor.rulePostProcessing(root_0));
<if(backtracking)>}<endif>
<endif>
>>

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
group AST;
/** Add an adaptor property that knows how to build trees */
@headerFile.members() ::= <<
/* @headerFile.members() */
pANTLR3_BASE_TREE_ADAPTOR adaptor;
pANTLR3_VECTOR_FACTORY vectors;
/* End @headerFile.members() */
>>
/** Install the tree adaptor interface pointer and anything else that
* tree parsers and producers require.
*/
@genericParser.apifuncs() ::= <<
<if(PARSER)>
ADAPTOR = ANTLR3_TREE_ADAPTORNew(instream->tstream->tokenSource->strFactory);<\n>
<endif>
ctx->vectors = antlr3VectorFactoryNew(64);
>>
@genericParser.cleanup() ::= <<
ctx->vectors->close(ctx->vectors);
<if(PARSER)>
/* We created the adaptor so we must free it
*/
ADAPTOR->free(ADAPTOR);
<endif>
>>
@returnScope.ruleReturnMembers() ::= <<
<recognizer.ASTLabelType> tree;
>>
/** Add a variable to track rule's return AST */
ruleDeclarations() ::= <<
<super.ruleDeclarations()>
<ASTLabelType> root_0;<\n>
>>
ruleInitializations() ::= <<
<super.ruleInitializations()>
root_0 = NULL;<\n>
>>
ruleLabelDefs() ::= <<
<super.ruleLabelDefs()>
<ruleDescriptor.tokenLabels:{<ASTLabelType> <it.label.text>_tree;}; separator="\n">
<ruleDescriptor.tokenListLabels:{<ASTLabelType> <it.label.text>_tree;}; separator="\n">
<ruleDescriptor.allTokenRefsInAltsWithRewrites
:{pANTLR3_REWRITE_RULE_<rewriteElementType>_STREAM stream_<it>;}; separator="\n">
<ruleDescriptor.allRuleRefsInAltsWithRewrites
:{pANTLR3_REWRITE_RULE_SUBTREE_STREAM stream_<it>;}; separator="\n">
>>
ruleLabelInitializations() ::= <<
<super.ruleLabelInitializations()>
<ruleDescriptor.tokenLabels:{<it.label.text>_tree = NULL;}; separator="\n">
<ruleDescriptor.tokenListLabels:{<it.label.text>_tree = NULL;}; separator="\n">
<ruleDescriptor.allTokenRefsInAltsWithRewrites
:{stream_<it> = antlr3RewriteRule<rewriteElementType>StreamNewAE(ADAPTOR, RECOGNIZER, (pANTLR3_UINT8)"token <it>"); }; separator="\n">
<ruleDescriptor.allRuleRefsInAltsWithRewrites
:{stream_<it>=antlr3RewriteRuleSubtreeStreamNewAE(ADAPTOR, RECOGNIZER, (pANTLR3_UINT8)"rule <it>");}; separator="\n">
<if(ruleDescriptor.hasMultipleReturnValues)>
retval.tree = NULL;
<endif>
>>
/** a rule label including default value */
ruleLabelInitVal(label) ::= <<
<super.ruleLabelInitVal(...)>
<label.label.text>.tree = <initValue(typeName=ruleLabelType(referencedRule=label.referencedRule))>;<\n>
>>
/** When doing auto AST construction, we must define some variables;
 * these should be turned off when doing rewrites. This must be a "mode"
 * because a rule can have both a rewrite and auto AST construction within
 * the same alternative block.
 */
@alt.declarations() ::= <<
<if(autoAST)>
<if(outerAlt)>
<endif>
<endif>
>>
@alt.initializations() ::= <<
<if(autoAST)>
<if(outerAlt)>
<if(!rewriteMode)>
root_0 = (<ASTLabelType>)(ADAPTOR->nilNode(ADAPTOR));<\n>
<endif>
<endif>
<endif>
>>
// T r a c k i n g R u l e E l e m e n t s
//
/** ID but track it for use in a rewrite rule */
tokenRefTrack(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)> <! Track implies no auto AST construction!>
<if(backtracking)>if ( BACKTRACKING==0 ) <endif>stream_<token>->add(stream_<token>, <label>, NULL);<\n>
>>
/** ids+=ID and track it for use in a rewrite rule; adds to ids *and*
* to the tracking list stream_ID for use in the rewrite.
*/
tokenRefTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefTrack(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) track for rewrite */
tokenRefRuleRootTrack(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)>
<if(backtracking)>if ( BACKTRACKING==0 ) <endif>stream_<token>->add(stream_<token>, <label>, NULL);<\n>
>>
wildcardTrack(label,elementIndex) ::= <<
<super.wildcard(...)>
>>
/** rule when output=AST and tracking for rewrite */
ruleRefTrack(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( BACKTRACKING==0 ) <endif>stream_<rule.name>->add(stream_<rule.name>, <label>.tree, NULL);
>>
/** x+=rule when output=AST and tracking for rewrite */
ruleRefTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefTrack(...)>
<listLabelTrack(...)>
>>
/** ^(rule ...) rewrite */
ruleRefRuleRootTrack(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<if(backtracking)>if ( BACKTRACKING==0 ) <endif>stream_<rule.name>->add(stream_<rule.name>, <label>.tree, NULL);
>>
/** ^(x+=rule ...) rewrite */
ruleRefRuleRootTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRootTrack(...)>
<listLabelAST(...)>
>>
// RULE REF AST
/** Match ^(label+=TOKEN ...) track for rewrite */
tokenRefRuleRootTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefRuleRootTrack(...)>
<listLabel(elem=label,...)>
>>
/* How to accumulate lists when we are doing rewrite tracking...
*/
listLabelTrack(label) ::= <<
/* listLabelTrack(label)
*/
if (list_<label> == NULL)
{
list_<label>=ctx->vectors->newVector(ctx->vectors);
}
list_<label>->add(list_<label>, <label>.tree, NULL);
>>
/* How to accumulate lists of rule outputs. This is only allowed with the
 * AST option, but if the user is going to walk the tree, they will want
 * all their custom elements from rule returns.
 *
 * Normally we use inline structures, which the compiler copies by value.
 * Here, however, we want to accumulate copies of the returned structures
 * because we are adding them to a list. This only makes sense if the
 * grammar is not rewriting the tree: a tree rewrite preserves only the
 * tree, not the object/structure returned from the rule, and will extract
 * just the tree pointer. If we are not doing a tree rewrite, the user may
 * wish to iterate the structures returned by the rule in action code and
 * will expect the user-defined returns[] elements to be available then.
 * Hence we cannot preserve just the returned tree; we must copy the local
 * structure and provide a function that can free the allocated space. We
 * cannot know how to free user-allocated elements and presume the user
 * will do so via their own factories for the structures they allocate.
 */
listLabelAST(label) ::= <<
if (list_<label> == NULL)
{
list_<label>=ctx->vectors->newVector(ctx->vectors);
}
{
RETURN_TYPE_<label> * tcopy;
tcopy = ANTLR3_CALLOC(1, sizeof(RETURN_TYPE_<label>)); /* Note no memory allocation checks! */
ANTLR3_MEMMOVE((void *)(tcopy), (const void *)&<label>, sizeof(RETURN_TYPE_<label>));
list_<label>->add(list_<label>, tcopy, freeScope); /* Add whatever the return type is */<\n>
}
>>
// R e w r i t e
rewriteCode(
alts,
description,
referencedElementsDeep, // ALL referenced elements to right of ->
referencedTokenLabels,
referencedTokenListLabels,
referencedRuleLabels,
referencedRuleListLabels,
rewriteBlockLevel,
enclosingTreeLevel,
treeLevel) ::=
<<
/* AST REWRITE
* elements : <referencedElementsDeep; separator=", ">
* token labels : <referencedTokenLabels; separator=", ">
* rule labels : <referencedRuleLabels; separator=", ">
* token list labels : <referencedTokenListLabels; separator=", ">
* rule list labels : <referencedRuleListLabels; separator=", ">
*/
<if(backtracking)>
if ( BACKTRACKING==0 ) <\n>
<endif>
{
<rewriteCodeLabelsDecl()>
<rewriteCodeLabelsInit()>
root_0 = (<ASTLabelType>)(ADAPTOR->nilNode(ADAPTOR));
<prevRuleRootRef()>.tree = root_0;
<alts:rewriteAlt(); separator="else ">
<if(TREE_PARSER)>
<if(rewriteMode)>
<prevRuleRootRef()>.tree = (<ASTLabelType>)(ADAPTOR->rulePostProcessing(ADAPTOR, root_0));
INPUT->replaceChildren(INPUT, ADAPTOR->getParent(ADAPTOR, retval.start),
ADAPTOR->getChildIndex(ADAPTOR, retval.start),
ADAPTOR->getChildIndex(ADAPTOR, _last),
retval.tree);
<endif>
<endif>
<prevRuleRootRef()>.tree = root_0; // set result root
<rewriteCodeLabelsFree()>
}
>>
rewriteCodeLabelsDecl() ::= <<
<referencedTokenLabels
:{pANTLR3_REWRITE_RULE_<rewriteElementType>_STREAM stream_<it>;};
separator="\n"
>
<referencedTokenListLabels
:{pANTLR3_REWRITE_RULE_<rewriteElementType>_STREAM stream_<it>;};
separator="\n"
>
<referencedRuleLabels
:{pANTLR3_REWRITE_RULE_SUBTREE_STREAM stream_<it>;};
separator="\n"
>
<referencedRuleListLabels
:{pANTLR3_REWRITE_RULE_SUBTREE_STREAM stream_<it>;};
separator="\n"
>
>>
rewriteCodeLabelsInit() ::= <<
<referencedTokenLabels
:{stream_<it>=antlr3RewriteRule<rewriteElementType>StreamNewAEE(ADAPTOR, RECOGNIZER, (pANTLR3_UINT8)"token <it>", <it>);};
separator="\n"
>
<referencedTokenListLabels
:{stream_<it>=antlr3RewriteRule<rewriteElementType>StreamNewAEV(ADAPTOR, RECOGNIZER, (pANTLR3_UINT8)"token <it>", list_<it>); };
separator="\n"
>
<referencedRuleLabels
:{stream_<it>=antlr3RewriteRuleSubtreeStreamNewAEE(ADAPTOR, RECOGNIZER, (pANTLR3_UINT8)"rule <it>", <it>.tree != NULL ? <it>.tree : NULL);};
separator="\n"
>
<referencedRuleListLabels
:{stream_<it>=antlr3RewriteRuleSubtreeStreamNewAEV(ADAPTOR, RECOGNIZER, (pANTLR3_UINT8)"rule <it>", list_<it>);};
separator="\n"
>
>>
rewriteCodeLabelsFree() ::= <<
<referencedTokenLabels
:{stream_<it>->free(stream_<it>);};
separator="\n"
>
<referencedTokenListLabels
:{stream_<it>->free(stream_<it>);};
separator="\n"
>
<referencedRuleLabels
:{stream_<it>->free(stream_<it>);};
separator="\n"
>
<referencedRuleListLabels
:{stream_<it>->free(stream_<it>);};
separator="\n"
>
>>
/** Generate code for an optional rewrite block; note it uses the deep
 * referenced-element list rather than the shallow list used by other blocks.
 */
rewriteOptionalBlock(
alt,
rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in the immediate block; no nested blocks
description) ::=
<<
// <fileName>:<description>
{
if ( <referencedElementsDeep:{el | stream_<el>->hasNext(stream_<el>)}; separator="||"> )
{
<alt>
}
<referencedElementsDeep:{el | stream_<el>->reset(stream_<el>);<\n>}>
}<\n>
>>
rewriteClosureBlock(
alt,
rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in the immediate block; no nested blocks
description) ::=
<<
// <fileName>:<description>
{
while ( <referencedElements:{el | stream_<el>->hasNext(stream_<el>)}; separator="||"> )
{
<alt>
}
<referencedElements:{el | stream_<el>->reset(stream_<el>);<\n>}>
}<\n>
>>
RewriteEarlyExitException() ::=
<<
CONSTRUCTEX();
EXCEPTION->type = ANTLR3_REWRITE_EARLY_EXCEPTION;
EXCEPTION->name = (void *)ANTLR3_REWRITE_EARLY_EXCEPTION_NAME;
>>
rewritePositiveClosureBlock(
alt,
rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in the immediate block; no nested blocks
description) ::=
<<
if ( !(<referencedElements:{el | stream_<el>->hasNext(stream_<el>)}; separator="||">) )
{
<RewriteEarlyExitException()>
}
else
{
while ( <referencedElements:{el | stream_<el>->hasNext(stream_<el>)}; separator="||"> ) {
<alt>
}
<referencedElements:{el | stream_<el>->reset(stream_<el>);<\n>}>
}
>>
rewriteAlt(a) ::= <<
// <a.description>
<if(a.pred)>
if (<a.pred>)
{
<a.alt>
}<\n>
<else>
{
<a.alt>
}<\n>
<endif>
>>
/** For empty rewrites: "r : ... -> ;" */
rewriteEmptyAlt() ::= "root_0 = NULL; /* \<-- rewriteEmptyAlt() */"
rewriteTree(root,children,description,enclosingTreeLevel,treeLevel) ::= <<
// <fileName>:<description>
{
<ASTLabelType> root_<treeLevel> = (<ASTLabelType>)(ADAPTOR->nilNode(ADAPTOR));
<root:rewriteElement()>
<children:rewriteElement()>
ADAPTOR->addChild(ADAPTOR, root_<enclosingTreeLevel>, root_<treeLevel>);
}<\n>
>>
rewriteElementList(elements) ::= "<elements:rewriteElement()>"
rewriteElement(e) ::= <<
<@pregen()>
<e.el>
>>
/** Gen ID or ID[args] */
rewriteTokenRef(token,elementIndex,hetero,args) ::= <<
ADAPTOR->addChild(ADAPTOR, root_<treeLevel>, <createRewriteNodeFromElement(...)>);<\n>
>>
/** Gen $label ... where defined via label=ID */
rewriteTokenLabelRef(label,elementIndex) ::= <<
ADAPTOR->addChild(ADAPTOR, root_<treeLevel>, stream_<label>->nextNode(stream_<label>));<\n>
>>
/** Gen $label ... where defined via label+=ID */
rewriteTokenListLabelRef(label,elementIndex) ::= <<
ADAPTOR->addChild(ADAPTOR, root_<treeLevel>, stream_<label>->nextNode(stream_<label>));<\n>
>>
/** Gen ^($label ...) */
rewriteTokenLabelRefRoot(label,elementIndex) ::= <<
root_<treeLevel> = (<ASTLabelType>)(ADAPTOR->becomeRootToken(ADAPTOR, stream_<label>->nextToken(stream_<label>), root_<treeLevel>));<\n>
>>
/** Gen ^($label ...) where label+=... */
rewriteTokenListLabelRefRoot ::= rewriteTokenLabelRefRoot
/** Gen ^(ID ...) or ^(ID[args] ...) */
rewriteTokenRefRoot(token,elementIndex,hetero,args) ::= <<
root_<treeLevel> = (<ASTLabelType>)(ADAPTOR->becomeRoot(ADAPTOR, <createRewriteNodeFromElement(...)>, root_<treeLevel>));<\n>
>>
rewriteImaginaryTokenRef(args,token,hetero,elementIndex) ::= <<
ADAPTOR->addChild(ADAPTOR, root_<treeLevel>, <createImaginaryNode(tokenType=token, ...)>);<\n>
>>
rewriteImaginaryTokenRefRoot(args,token,hetero,elementIndex) ::= <<
root_<treeLevel> = (<ASTLabelType>)(ADAPTOR->becomeRoot(ADAPTOR, <createImaginaryNode(tokenType=token, ...)>, root_<treeLevel>));<\n>
>>
/** plain -> {foo} action */
rewriteAction(action) ::= <<
root_0 = <action>;<\n>
>>
/** What is the name of the previous value of this rule's root tree? This
 * lets us refer to $rule to mean the previous value. I am reusing the
 * variable 'tree' sitting in the retval struct to hold the value of root_0 right
 * before I set it during rewrites. The assignment will be to retval.tree.
 */
prevRuleRootRef() ::= "retval"
rewriteRuleRef(rule,dup) ::= <<
ADAPTOR->addChild(ADAPTOR, root_<treeLevel>, stream_<rule>->nextTree(stream_<rule>));<\n>
>>
rewriteRuleRefRoot(rule,dup) ::= <<
root_<treeLevel> = (<ASTLabelType>)(ADAPTOR->becomeRoot(ADAPTOR, stream_<rule>->nextNode(stream_<rule>), root_<treeLevel>));<\n>
>>
rewriteNodeAction(action) ::= <<
ADAPTOR->addChild(ADAPTOR, root_<treeLevel>, <action>);<\n>
>>
rewriteNodeActionRoot(action) ::= <<
root_<treeLevel> = (<ASTLabelType>)(ADAPTOR->becomeRoot(ADAPTOR, <action>, root_<treeLevel>));<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel=rule */
rewriteRuleLabelRef(label) ::= <<
ADAPTOR->addChild(ADAPTOR, root_<treeLevel>, stream_<label>->nextTree(stream_<label>));<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel+=rule */
rewriteRuleListLabelRef(label) ::= <<
ADAPTOR->addChild(ADAPTOR, root_<treeLevel>, stream_<label>->nextTree(stream_<label>));<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel=rule */
rewriteRuleLabelRefRoot(label) ::= <<
root_<treeLevel> = (<ASTLabelType>)(ADAPTOR->becomeRoot(ADAPTOR, stream_<label>->nextNode(stream_<label>), root_<treeLevel>));<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel+=rule */
rewriteRuleListLabelRefRoot(label) ::= <<
root_<treeLevel> = (<ASTLabelType>)(ADAPTOR->becomeRoot(ADAPTOR, stream_<label>->nextNode(stream_<label>), root_<treeLevel>));<\n>
>>
createImaginaryNode(tokenType,hetero,args) ::= <<
<if(hetero)>
<! new MethodNode(IDLabel, args) !>
<hetero>New(<tokenType><if(args)>, <args; separator=", "><endif>)
<else>
<if(args)>
#if <length(args)> == 2
(<ASTLabelType>)ADAPTOR->createTypeTokenText(ADAPTOR, <tokenType>, TOKTEXT(<args; separator=", ">))
#else
(<ASTLabelType>)ADAPTOR->createTypeText(ADAPTOR, <tokenType>, (pANTLR3_UINT8)<args; separator=", ">)
#endif
<else>
(<ASTLabelType>)ADAPTOR->createTypeText(ADAPTOR, <tokenType>, (pANTLR3_UINT8)"<tokenType>")
<endif>
<endif>
>>
createRewriteNodeFromElement(token,hetero,args) ::= <<
<if(hetero)>
<hetero>New(stream_<token>->nextToken(stream_<token>)<if(args)>, <args; separator=", "><endif>)
<else>
<if(args)> <! must create new node from old !>
#if <length(args)> == 2
ADAPTOR->createTypeTokenText(ADAPTOR, <token>->getType(<token>), TOKTEXT(<token>, <args; separator=", ">))
#else
ADAPTOR->createTypeToken(ADAPTOR, <token>->getType(<token>), <token>, <args; separator=", ">)
#endif
<else>
stream_<token>->nextNode(stream_<token>)
<endif>
<endif>
>>

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Template overrides to add debugging to AST stuff. The dynamic inheritance
 * hierarchy is set up as ASTDbg : AST : Dbg : Java by the code generator.
 */
group ASTDbg;
parserMembers() ::= <<
protected DebugTreeAdaptor adaptor =
new DebugTreeAdaptor(null,new CommonTreeAdaptor());
public void setTreeAdaptor(TreeAdaptor adaptor) {
this.adaptor = new DebugTreeAdaptor(dbg,adaptor);
}
public TreeAdaptor getTreeAdaptor() {
return adaptor;
}<\n>
>>
parserCtorBody() ::= <<
<super.parserCtorBody()>
adaptor.setDebugListener(dbg);
>>
createListenerAndHandshake() ::= <<
DebugEventSocketProxy proxy =
new DebugEventSocketProxy(this,port,<if(TREE_PARSER)>input.getTreeAdaptor()<else>adaptor<endif>);
adaptor.setDebugListener(proxy);
setDebugListener(proxy);
set<inputStreamType>(new Debug<inputStreamType>(input,proxy));
try {
proxy.handshake();
}
catch (IOException ioe) {
reportError(ioe);
}
>>
ctorForPredefinedListener() ::= <<
public <name>(<inputStreamType> input, DebugEventListener dbg) {
super(input, dbg);
<if(profile)>
Profiler p = (Profiler)dbg;
p.setParser(this);<\n>
<endif>
<parserCtorBody()>
<grammar.directDelegates:{g|<g:delegateName()> = new <g.recognizerName>(input, dbg, state, this<grammar.delegators:{g|, <g:delegateName()>}>);}; separator="\n">
}<\n>
>>
@rewriteElement.pregen() ::= "dbg.location(<e.line>,<e.pos>);"

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Templates for building ASTs during normal parsing.
*
* Deal with many combinations. Dimensions are:
* Auto build or rewrite
* no label, label, list label (label/no-label handled together)
* child, root
* token, set, rule, wildcard
*
* The situation is not too bad as rewrite (->) usage makes ^ and !
* invalid. There is no huge explosion of combinations.
*/
group ASTParser;
@rule.setErrorReturnValue() ::= <<
retval.tree = (<ASTLabelType>)(ADAPTOR->errorNode(ADAPTOR, INPUT, retval.start, LT(-1), EXCEPTION));
>>
// TOKEN AST STUFF
/** ID and output=AST */
tokenRef(token,label,elementIndex,hetero) ::= <<
<super.tokenRef(...)>
<if(backtracking)>if ( BACKTRACKING==0 ) {<endif>
<label>_tree = ADAPTOR->create(ADAPTOR, <label>);
ADAPTOR->addChild(ADAPTOR, root_0, <label>_tree);
<if(backtracking)>}<endif>
>>
/** ID! and output=AST (same as plain tokenRef) */
tokenRefBang(token,label,elementIndex) ::= "<super.tokenRef(...)>"
/** ID^ and output=AST */
tokenRefRuleRoot(token,label,elementIndex,hetero) ::= <<
<super.tokenRef(...)>
<if(backtracking)>if ( BACKTRACKING==0 ) {<endif>
<label>_tree = <createNodeFromToken(...)>;
root_0 = (<ASTLabelType>)(ADAPTOR->becomeRoot(ADAPTOR, <label>_tree, root_0));
<if(backtracking)>}<endif>
>>
/** ids+=ID! and output=AST */
tokenRefBangAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)>
<listLabel(elem=label,...)>
>>
/** label+=TOKEN when output=AST but not rewrite alt */
tokenRefAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** Match label+=TOKEN^ when output=AST but not rewrite alt */
tokenRefRuleRootAndListLabel(token,label,hetero,elementIndex) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// SET AST
// The match-set stuff is interesting in that it uses an argument list
// to pass code to the default matchSet; another possible way to alter
// inherited code. I don't use the region stuff because I need to pass
// different chunks depending on the operator. I don't like making
// the template name include the operator, as the number of templates gets
// large, but this is the most flexible approach, as opposed to having
// the code generator call matchSet and then add root code or rule-root
// code plus list-label code plus ... The combinations might require
// complicated rather than merely appended code. Investigate that
// refactoring when there is more time.
matchSet(s,label,hetero,elementIndex,postmatchCode) ::= <<
<super.matchSet(..., postmatchCode={<if(backtracking)>if ( BACKTRACKING==0 ) <endif>ADAPTOR->addChild(ADAPTOR, root_0, <createNodeFromToken(...)>);})>
>>
matchRuleBlockSet(s,label,hetero,elementIndex,postmatchCode,treeLevel="0") ::= <<
<matchSet(...)>
>>
matchSetBang(s,label,elementIndex,postmatchCode) ::= "<super.matchSet(...)>"
// note there is no matchSetTrack because -> rewrites force sets to be
// plain old blocks of alts: (A|B|...|C)
matchSetRuleRoot(s,label,hetero,elementIndex,debug) ::= <<
<if(label)>
<label>=(<labelType>)LT(1);<\n>
<endif>
<super.matchSet(..., postmatchCode={<if(backtracking)>if ( BACKTRACKING==0 ) <endif>root_0 = (<ASTLabelType>)(ADAPTOR->becomeRoot(ADAPTOR, <createNodeFromToken(...)>, root_0));})>
>>
// RULE REF AST
/** rule when output=AST */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( BACKTRACKING==0 ) <endif>ADAPTOR->addChild(ADAPTOR, root_0, <label>.tree);
>>
/** rule! is same as normal rule ref */
ruleRefBang(rule,label,elementIndex,args,scope) ::= "<super.ruleRef(...)>"
/** rule^ */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( BACKTRACKING==0 ) <endif>root_0 = (<ASTLabelType>)(ADAPTOR->becomeRoot(ADAPTOR, <label>.tree, root_0));
>>
/** x+=rule when output=AST */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabelAST(...)>
>>
/** x+=rule! when output=AST is a rule ref with list addition */
ruleRefBangAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefBang(...)>
<listLabelAST(...)>
>>
/** x+=rule^ */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<listLabelAST(...)>
>>
// WILDCARD AST
wildcard(label,elementIndex) ::= <<
<super.wildcard(...)>
<if(backtracking)>if ( BACKTRACKING==0 ) {<endif>
<label>_tree = (<ASTLabelType>)(ADAPTOR->create(ADAPTOR, <label>));
ADAPTOR->addChild(ADAPTOR, root_0, <label>_tree);
<if(backtracking)>}<endif>
>>
wildcardBang(label,elementIndex) ::= "<super.wildcard(...)>"
wildcardRuleRoot(label,elementIndex) ::= <<
<super.wildcard(...)>
<if(backtracking)>if ( BACKTRACKING==0 ) {<endif>
<label>_tree = (<ASTLabelType>)(ADAPTOR->create(ADAPTOR, <label>));
root_0 = (<ASTLabelType>)(ADAPTOR->becomeRoot(ADAPTOR, <label>_tree, root_0));
<if(backtracking)>}<endif>
>>
createNodeFromToken(label,hetero) ::= <<
<if(hetero)>
<hetero>New(<label>) <! new MethodNode(IDLabel) !>
<else>
(<ASTLabelType>)(ADAPTOR->create(ADAPTOR, <label>))
<endif>
>>
ruleCleanUp() ::= <<
<super.ruleCleanUp(...)>
<if(backtracking)>
if ( BACKTRACKING==0 ) {<\n>
<endif>
<if(!ruleDescriptor.isSynPred)>
retval.stop = LT(-1);<\n>
<endif>
retval.tree = (<ASTLabelType>)(ADAPTOR->rulePostProcessing(ADAPTOR, root_0));
ADAPTOR->setTokenBoundaries(ADAPTOR, retval.tree, retval.start, retval.stop);
<if(backtracking)>
}
<endif>
<ruleDescriptor.allTokenRefsInAltsWithRewrites
:{stream_<it>->free(stream_<it>);}; separator="\n">
<ruleDescriptor.allRuleRefsInAltsWithRewrites
:{stream_<it>->free(stream_<it>);}; separator="\n">
>>

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Templates for building ASTs during tree parsing.
*
* Deal with many combinations. Dimensions are:
* Auto build or rewrite
* no label, label, list label (label/no-label handled together)
* child, root
* token, set, rule, wildcard
*
* Each combination has its own template except that label/no label
* is combined into tokenRef, ruleRef, ...
*/
group ASTTreeParser;
/** Add a variable to track last element matched */
ruleDeclarations() ::= <<
<super.ruleDeclarations()>
<ASTLabelType> _last;<\n>
<ASTLabelType> _first_0;<\n>
>>
/** Add a variable to track last element matched */
ruleInitializations() ::= <<
<super.ruleInitializations()>
_last = NULL;<\n>
_first_0 = NULL;<\n>
>>
/** What to emit when there is no rewrite rule. For auto build
* mode, does nothing.
*/
noRewrite(rewriteBlockLevel, treeLevel) ::= <<
<if(backtracking)>if ( BACKTRACKING ==0 ) {<endif>
<if(rewriteMode)>
retval.tree = (<ASTLabelType>)_first_0;
if ( ADAPTOR->getParent(ADAPTOR, retval.tree) != NULL && ADAPTOR->isNilNode(ADAPTOR, ADAPTOR->getParent(ADAPTOR, retval.tree) ) )
{
retval.tree = (<ASTLabelType>)ADAPTOR->getParent(ADAPTOR, retval.tree);
}
<endif>
<if(backtracking)>}<endif>
>>
/** match ^(root children) in tree parser; override here to
* add tree construction actions.
*/
tree(root, actionsAfterRoot, children, nullableChildList,
enclosingTreeLevel, treeLevel) ::= <<
_last = (<ASTLabelType>)LT(1);
{
<ASTLabelType> _save_last_<treeLevel>;
<ASTLabelType> _first_last_<treeLevel>;
<if(!rewriteMode)>
<ASTLabelType> root_<treeLevel>;
<endif>
_save_last_<treeLevel> = _last;
_first_last_<treeLevel> = NULL;
<if(!rewriteMode)>
root_<treeLevel> = (<ASTLabelType>)(ADAPTOR->nilNode(ADAPTOR));
<endif>
<root:element()>
<if(rewriteMode)>
<if(backtracking)>if ( BACKTRACKING ==0 ) {<endif>
<if(root.el.rule)>
if ( _first_<enclosingTreeLevel> == NULL ) _first_<enclosingTreeLevel> = <root.el.label>.tree;
<else>
if ( _first_<enclosingTreeLevel> == NULL ) _first_<enclosingTreeLevel> = <root.el.label>;
<endif>
<if(backtracking)>}<endif>
<endif>
<actionsAfterRoot:element()>
<if(nullableChildList)>
if ( LA(1)==ANTLR3_TOKEN_DOWN ) {
MATCHT(ANTLR3_TOKEN_DOWN, NULL);
<children:element()>
MATCHT(ANTLR3_TOKEN_UP, NULL);
}
<else>
MATCHT(ANTLR3_TOKEN_DOWN, NULL);
<children:element()>
MATCHT(ANTLR3_TOKEN_UP, NULL);
<endif>
<if(!rewriteMode)>
ADAPTOR->addChild(ADAPTOR, root_<enclosingTreeLevel>, root_<treeLevel>);
<endif>
_last = _save_last_<treeLevel>;
}<\n>
>>
// TOKEN AST STUFF
/** ID! and output=AST (same as plain tokenRef) except that it also
 * sets _last
 */
tokenRefBang(token,label,elementIndex) ::= <<
_last = (<ASTLabelType>)LT(1);
<super.tokenRef(...)>
>>
/** ID auto construct */
tokenRef(token,label,elementIndex,hetero) ::= <<
_last = (<ASTLabelType>)LT(1);
<super.tokenRef(...)>
<if(!rewriteMode)>
<if(backtracking)>if ( BACKTRACKING ==0 ) {<endif>
<if(hetero)>
<label>_tree = <hetero>New(<label>);
<else>
<label>_tree = (<ASTLabelType>)ADAPTOR->dupNode(ADAPTOR, <label>);
<endif>
ADAPTOR->addChild(ADAPTOR, root_<treeLevel>, <label>_tree);
<if(backtracking)>}<endif>
<else>
<if(backtracking)>if ( BACKTRACKING ==0 ) {<endif>
if ( _first_<treeLevel> == NULL ) _first_<treeLevel> = <label>;
<if(backtracking)>}<endif>
<endif>
>>
/** label+=TOKEN auto construct */
tokenRefAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) auto construct */
tokenRefRuleRoot(token,label,elementIndex,hetero) ::= <<
_last = (<ASTLabelType>)LT(1);
<super.tokenRef(...)>
<if(!rewriteMode)>
<if(backtracking)>if ( BACKTRACKING == 0 ) {<endif>
<if(hetero)>
<label>_tree = <hetero>New(<label>);
<else>
<label>_tree = (<ASTLabelType>)ADAPTOR->dupNode(ADAPTOR, <label>);
<endif><\n>
root_<treeLevel> = (<ASTLabelType>)ADAPTOR->becomeRoot(ADAPTOR, <label>_tree, root_<treeLevel>);
<if(backtracking)>}<endif>
<endif>
>>
/** Match ^(label+=TOKEN ...) auto construct */
tokenRefRuleRootAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// SET AST
matchSet(s,label,hetero,elementIndex,postmatchCode) ::= <<
_last = (<ASTLabelType>)LT(1);
<super.matchSet(..., postmatchCode={
<if(!rewriteMode)>
<if(backtracking)>if ( BACKTRACKING == 0 ) {<endif>
<if(hetero)>
<label>_tree = <hetero>New(<label>);
<else>
<label>_tree = (<ASTLabelType>)ADAPTOR->dupNode(ADAPTOR, <label>);
<endif><\n>
ADAPTOR->addChild(ADAPTOR, root_<treeLevel>, <label>_tree);
<if(backtracking)>}<endif>
<endif>
}
)>
>>
matchRuleBlockSet(s,label,hetero,elementIndex,postmatchCode,treeLevel="0") ::= <<
<matchSet(...)>
<noRewrite()> <! set return tree !>
>>
matchSetBang(s,label,elementIndex,postmatchCode) ::= <<
_last = (<ASTLabelType>)LT(1);
<super.matchSet(...)>
>>
matchSetRuleRoot(s,label,hetero,elementIndex,debug) ::= <<
<super.matchSet(..., postmatchCode={
<if(!rewriteMode)>
<if(backtracking)>if ( BACKTRACKING == 0 ) {<endif>
<if(hetero)>
<label>_tree = <hetero>New(<label>);
<else>
<label>_tree = (<ASTLabelType>)ADAPTOR->dupNode(ADAPTOR, <label>);
<endif>
root_<treeLevel> = (<ASTLabelType>)ADAPTOR->becomeRoot(ADAPTOR, <label>_tree, root_<treeLevel>);
<if(backtracking)>}<endif>
<endif>
}
)>
>>
// RULE REF AST
/** rule auto construct */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)LT(1);
<super.ruleRef(...)>
<if(backtracking)>if ( BACKTRACKING == 0 )
{
<endif>
<if(!rewriteMode)>
ADAPTOR->addChild(ADAPTOR, root_<treeLevel>, <label>.tree);
<else>
if ( _first_<treeLevel> == NULL ) _first_<treeLevel> = <label>.tree;
<endif>
<if(backtracking)>}<endif>
>>
/** x+=rule auto construct */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<super.listLabelAST(elem=label,...)>
>>
/** ^(rule ...) auto construct */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)LT(1);
<super.ruleRef(...)>
<if(!rewriteMode)>
<if(backtracking)>if ( BACKTRACKING == 0 ) <endif>root_<treeLevel> = (<ASTLabelType>)(ADAPTOR->becomeRoot(ADAPTOR, <label>.tree, root_<treeLevel>));
<endif>
>>
/** ^(x+=rule ...) auto construct */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<super.listLabelAST(elem=label,...)>
>>
/** rule when output=AST and tracking for rewrite */
ruleRefTrack(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)LT(1);
<super.ruleRefTrack(...)>
>>
/** x+=rule when output=AST and tracking for rewrite */
ruleRefTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)LT(1);
<super.ruleRefTrackAndListLabel(...)>
>>
/** ^(rule ...) rewrite */
ruleRefRuleRootTrack(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)LT(1);
<super.ruleRefRootTrack(...)>
>>
/** ^(x+=rule ...) rewrite */
ruleRefRuleRootTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)LT(1);
<super.ruleRefRuleRootTrackAndListLabel(...)>
>>
/** Streams for token refs are tree nodes now; override to
* change nextToken to nextNode.
*/
createRewriteNodeFromElement(token,hetero,scope) ::= <<
<if(hetero)>
<hetero>New(stream_<token>->nextNode(stream_<token>))
<else>
stream_<token>->nextNode(stream_<token>)
<endif>
>>
ruleCleanUp() ::= <<
<super.ruleCleanUp(...)>
<if(backtracking)>
if ( BACKTRACKING==0 ) {<\n>
<endif>
<if(!ruleDescriptor.isSynPred)>
retval.stop = LT(-1);<\n>
<endif>
retval.tree = ADAPTOR->rulePostProcessing(ADAPTOR, root_0);
<if(backtracking)>
}
<endif>
<ruleDescriptor.allTokenRefsInAltsWithRewrites
:{stream_<it>->free(stream_<it>);}; separator="\n">
<ruleDescriptor.allRuleRefsInAltsWithRewrites
:{stream_<it>->free(stream_<it>);}; separator="\n">
>>

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Template overrides to add debugging to normal C output;
* If ASTs are built, then you'll also get ASTDbg.stg loaded.
*/
group Dbg;
@genericParser.members() ::= <<
<if(grammar.grammarIsRoot)>
const char *
ruleNames[] =
{
"invalidRule", <grammar.allImportedRules:{rST | "<rST.name>"}; wrap="\n ", separator=", ">
};<\n>
<endif>
<if(grammar.grammarIsRoot)> <! grammar imports other grammar(s) !>
static ANTLR3_UINT32 ruleLevel = 0;
static ANTLR3_UINT32 getRuleLevel()
{
return ruleLevel;
}
static void incRuleLevel()
{
ruleLevel++;
}
static void decRuleLevel()
{
ruleLevel--;
}
<else> <! imported grammar !>
static ANTLR3_UINT32
getRuleLevel()
{
return <grammar.delegators:{g| <g:delegateName()>}>->getRuleLevel();
}
static void incRuleLevel()
{
<grammar.delegators:{g| <g:delegateName()>}>->incRuleLevel();
}
static void
decRuleLevel()
{
<grammar.delegators:{g| <g:delegateName()>}>->decRuleLevel();
}
<endif>
<if(profile)>
// Profiling not yet implemented for C target
//
<endif>
<if(grammar.grammarIsRoot)>
<ctorForPredefinedListener()>
<else>
<ctorForDelegateGrammar()>
<endif>
static ANTLR3_BOOLEAN
evalPredicate(p<name> ctx, ANTLR3_BOOLEAN result, const char * predicate)
{
DBG->semanticPredicate(DBG, result, predicate);
return result;
}<\n>
>>
@genericParser.debugStuff() ::= <<
<if(grammar.grammarIsRoot)>
<createListenerAndHandshake()>
<endif>
>>
ctorForProfilingRootGrammar() ::= <<
>>
/** Basically we don't want to set any debug listeners here, as the root grammar will have one. */
ctorForDelegateGrammar() ::= <<
//public <name>(<inputStreamType> input, DebugEventListener dbg, RecognizerSharedState state<grammar.delegators:{g|, <g.recognizerName> <g:delegateName()>}>) {
// super(input, dbg, state);
//parserCtorBody()
// <grammar.directDelegates:
{g|<g:delegateName()> = new <g.recognizerName>(input, this, state<grammar.delegators:{g|, <g:delegateName()>}>);}; separator="\n">
//}
>>
ctorForPredefinedListener() ::= <<
//public <name>(<inputStreamType> input, DebugEventListener dbg) {
// super(input, dbg, new RecognizerSharedState());
//<if(profile)>
// Profiler p = (Profiler)dbg;
// p.setParser(this);
//<endif>
// //parserCtorBody()
// <grammar.directDelegates:{g|<g:delegateName()> = new <g.recognizerName>(input, dbg, state, this<grammar.delegators:{g|, <g:delegateName()>}>);}; separator="\n">
//}<\n>
>>
createListenerAndHandshake() ::= <<
{
// DEBUG MODE code
//
pANTLR3_DEBUG_EVENT_LISTENER proxy;
proxy = antlr3DebugListenerNew();
proxy->grammarFileName = INPUT->tokenSource->strFactory->newStr8(INPUT->tokenSource->strFactory, (pANTLR3_UINT8)ctx->getGrammarFileName());
<if(TREE_PARSER)>
proxy->adaptor = ADAPTOR;
<endif>
PARSER->setDebugListener(PARSER, proxy);
// Try to connect to the debugger (waits forever for a connection)
//
proxy->handshake(proxy);
// End DEBUG MODE code
//
}
>>
@rule.preamble() ::= <<
DBG->enterRule(DBG, getGrammarFileName(), (const char *)"<ruleName>");
if ( getRuleLevel()==0 )
{
DBG->commence(DBG);
}
incRuleLevel();
DBG->location(DBG, <ruleDescriptor.tree.line>, <ruleDescriptor.tree.column>);<\n>
>>
@rule.postamble() ::= <<
DBG->location(DBG, <ruleDescriptor.EORNode.line>, <ruleDescriptor.EORNode.column>);<\n>
DBG->exitRule(DBG, getGrammarFileName(), (const char *)"<ruleName>");
decRuleLevel();
if ( getRuleLevel()==0 )
{
DBG->terminate(DBG);
}
<\n>
>>
@synpred.start() ::= "DBG->beginBacktrack(DBG, BACKTRACKING);"
@synpred.stop() ::= "DBG->endBacktrack(DBG, BACKTRACKING, success);"
// Common debug event triggers used by region overrides below
enterSubRule() ::=
"DBG->enterSubRule(DBG, <decisionNumber>);<\n>"
exitSubRule() ::=
"DBG->exitSubRule(DBG, <decisionNumber>);<\n>"
enterDecision() ::=
"DBG->enterDecision(DBG, <decisionNumber>);<\n>"
exitDecision() ::=
"DBG->exitDecision(DBG, <decisionNumber>);<\n>"
enterAlt(n) ::= "DBG->enterAlt(DBG, <n>);<\n>"
// Region overrides that tell various constructs to add debugging triggers
@block.predecision() ::= "<enterSubRule()><enterDecision()>"
@block.postdecision() ::= "<exitDecision()>"
@block.postbranch() ::= "<exitSubRule()>"
@ruleBlock.predecision() ::= "<enterDecision()>"
@ruleBlock.postdecision() ::= "<exitDecision()>"
@ruleBlockSingleAlt.prealt() ::= "<enterAlt(n=\"1\")>"
@blockSingleAlt.prealt() ::= "<enterAlt(n=\"1\")>"
@positiveClosureBlock.preloop() ::= "<enterSubRule()>"
@positiveClosureBlock.postloop() ::= "<exitSubRule()>"
@positiveClosureBlock.predecision() ::= "<enterDecision()>"
@positiveClosureBlock.postdecision() ::= "<exitDecision()>"
@positiveClosureBlock.earlyExitException() ::=
"DBG->recognitionException(DBG, EXCEPTION);<\n>"
@closureBlock.preloop() ::= "<enterSubRule()>"
@closureBlock.postloop() ::= "<exitSubRule()>"
@closureBlock.predecision() ::= "<enterDecision()>"
@closureBlock.postdecision() ::= "<exitDecision()>"
@altSwitchCase.prealt() ::= "<enterAlt(n=i)>"
@element.prematch() ::=
"DBG->location(DBG, <it.line>, <it.pos>);"
@matchSet.mismatchedSetException() ::=
"DBG->recognitionException(DBG, EXCEPTION);"
@newNVException.noViableAltException() ::= "DBG->recognitionException(DBG, EXCEPTION);"
dfaDecision(decisionNumber,description) ::= <<
alt<decisionNumber> = cdfa<decisionNumber>.predict(ctx, RECOGNIZER, ISTREAM, &cdfa<decisionNumber>);
if (HASEXCEPTION())
{
DBG->recognitionException(DBG, EXCEPTION);
goto rule<ruleDescriptor.name>Ex;
}
<checkRuleBacktrackFailure()>
>>
@cyclicDFA.errorMethod() ::= <<
//static void
//dfaError(p<name> ctx)
//{
// DBG->recognitionException(DBG, EXCEPTION);
//}
>>
/** Force predicate validation to trigger an event */
evalPredicate(pred,description) ::= <<
evalPredicate(ctx, <pred>, (const char *)"<description>")
>>

/*
[The "BSD licence"]
Copyright (c) 2007-2008 Johannes Luber
Copyright (c) 2005-2007 Kunle Odutola
Copyright (c) 2005 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
group AST;
@outputFile.imports() ::= <<
<@super.imports()>
<if(!TREE_PARSER)><! tree parser would already have imported !>
using Antlr.Runtime.Tree;<\n>
<endif>
>>
@genericParser.members() ::= <<
<@super.members()>
<parserMembers()>
>>
/** Add an adaptor property that knows how to build trees */
parserMembers() ::= <<
protected ITreeAdaptor adaptor = new CommonTreeAdaptor();<\n>
public ITreeAdaptor TreeAdaptor
{
get { return this.adaptor; }
set {
this.adaptor = value;
<grammar.directDelegates:{g|<g:delegateName()>.TreeAdaptor = this.adaptor;}>
}
}
>>
@returnScope.ruleReturnMembers() ::= <<
private <ASTLabelType> tree;
override public object Tree
{
get { return tree; }
set { tree = (<ASTLabelType>) value; }
}
>>
/** Add a variable to track rule's return AST */
ruleDeclarations() ::= <<
<super.ruleDeclarations()>
<ASTLabelType> root_0 = null;<\n>
>>
ruleLabelDefs() ::= <<
<super.ruleLabelDefs()>
<ruleDescriptor.tokenLabels:{<ASTLabelType> <it.label.text>_tree=null;}; separator="\n">
<ruleDescriptor.tokenListLabels:{<ASTLabelType> <it.label.text>_tree=null;}; separator="\n">
<ruleDescriptor.allTokenRefsInAltsWithRewrites
:{RewriteRule<rewriteElementType>Stream stream_<it> = new RewriteRule<rewriteElementType>Stream(adaptor,"token <it>");}; separator="\n">
<ruleDescriptor.allRuleRefsInAltsWithRewrites
:{RewriteRuleSubtreeStream stream_<it> = new RewriteRuleSubtreeStream(adaptor,"rule <it>");}; separator="\n">
>>
/** When doing auto AST construction, we must define some variables;
 * these should be turned off if doing rewrites. This must be a "mode"
 * as a rule could have both rewrite and AST within the same alternative
 * block.
 */
@alt.declarations() ::= <<
<if(autoAST)>
<if(outerAlt)>
<if(!rewriteMode)>
root_0 = (<ASTLabelType>)adaptor.GetNilNode();<\n>
<endif>
<endif>
<endif>
>>
// T r a c k i n g R u l e E l e m e n t s
/** ID and track it for use in a rewrite rule */
tokenRefTrack(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)> <! Track implies no auto AST construction!>
<if(backtracking)>if ( state.backtracking==0 ) <endif>stream_<token>.Add(<label>);<\n>
>>
/** ids+=ID and track it for use in a rewrite rule; adds to ids *and*
* to the tracking list stream_ID for use in the rewrite.
*/
tokenRefTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefTrack(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) track for rewrite */
tokenRefRuleRootTrack(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>stream_<token>.Add(<label>);<\n>
>>
/** Match ^(label+=TOKEN ...) track for rewrite */
tokenRefRuleRootTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefRuleRootTrack(...)>
<listLabel(elem=label,...)>
>>
wildcardTrack(label,elementIndex) ::= <<
<super.wildcard(...)>
>>
/** rule when output=AST and tracking for rewrite */
ruleRefTrack(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>stream_<rule.name>.Add(<label>.Tree);
>>
/** x+=rule when output=AST and tracking for rewrite */
ruleRefTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefTrack(...)>
<listLabel(elem=label+".Tree",...)>
>>
/** ^(rule ...) rewrite */
ruleRefRuleRootTrack(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>stream_<rule.name>.Add(<label>.Tree);
>>
/** ^(x+=rule ...) rewrite */
ruleRefRuleRootTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRootTrack(...)>
<listLabel(elem=label+".Tree",...)>
>>
// R e w r i t e
rewriteCode(
alts, description,
referencedElementsDeep, // ALL referenced elements to right of ->
referencedTokenLabels,
referencedTokenListLabels,
referencedRuleLabels,
referencedRuleListLabels,
rewriteBlockLevel, enclosingTreeLevel, treeLevel) ::=
<<
// AST REWRITE
// elements: <referencedElementsDeep; separator=", ">
// token labels: <referencedTokenLabels; separator=", ">
// rule labels: <referencedRuleLabels; separator=", ">
// token list labels: <referencedTokenListLabels; separator=", ">
// rule list labels: <referencedRuleListLabels; separator=", ">
<if(backtracking)>
if ( state.backtracking==0 ) {<\n>
<endif>
<prevRuleRootRef()>.Tree = root_0;
<rewriteCodeLabels()>
root_0 = (<ASTLabelType>)adaptor.GetNilNode();
<alts:rewriteAlt(); separator="else ">
<! if tree parser and rewrite=true !>
<if(TREE_PARSER)>
<if(rewriteMode)>
<prevRuleRootRef()>.Tree = (<ASTLabelType>)adaptor.RulePostProcessing(root_0);
input.ReplaceChildren(adaptor.GetParent(retval.Start),
adaptor.GetChildIndex(retval.Start),
adaptor.GetChildIndex(_last),
retval.Tree);
<endif>
<endif>
<! if parser or rewrite!=true, we need to set result !>
<if(!TREE_PARSER)>
<prevRuleRootRef()>.Tree = root_0;
<endif>
<if(!rewriteMode)>
<prevRuleRootRef()>.Tree = root_0;
<endif>
<if(backtracking)>
}
<endif>
>>
rewriteCodeLabels() ::= <<
<referencedTokenLabels
:{RewriteRule<rewriteElementType>Stream stream_<it> = new RewriteRule<rewriteElementType>Stream(adaptor, "token <it>", <it>);};
separator="\n"
>
<referencedTokenListLabels
:{RewriteRule<rewriteElementType>Stream stream_<it> = new RewriteRule<rewriteElementType>Stream(adaptor,"token <it>", list_<it>);};
separator="\n"
>
<referencedRuleLabels
:{RewriteRuleSubtreeStream stream_<it> = new RewriteRuleSubtreeStream(adaptor, "token <it>", (<it>!=null ? <it>.Tree : null));};
separator="\n"
>
<referencedRuleListLabels
:{RewriteRuleSubtreeStream stream_<it> = new RewriteRuleSubtreeStream(adaptor, "token <it>", list_<it>);};
separator="\n"
>
>>
/** Generate code for an optional rewrite block; note it uses the deep ref'd element
 * list rather than the shallow list used by other blocks.
 */
rewriteOptionalBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in the immediate block; no nested blocks
description) ::=
<<
// <fileName>:<description>
if ( <referencedElementsDeep:{el | stream_<el>.HasNext()}; separator=" || "> )
{
<alt>
}
<referencedElementsDeep:{el | stream_<el>.Reset();<\n>}>
>>
rewriteClosureBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in the immediate block; no nested blocks
description) ::=
<<
// <fileName>:<description>
while ( <referencedElements:{el | stream_<el>.HasNext()}; separator=" || "> )
{
<alt>
}
<referencedElements:{el | stream_<el>.Reset();<\n>}>
>>
rewritePositiveClosureBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in the immediate block; no nested blocks
description) ::=
<<
if ( !(<referencedElements:{el | stream_<el>.HasNext()}; separator=" || ">) ) {
throw new RewriteEarlyExitException();
}
while ( <referencedElements:{el | stream_<el>.HasNext()}; separator=" || "> )
{
<alt>
}
<referencedElements:{el | stream_<el>.Reset();<\n>}>
>>
rewriteAlt(a) ::= <<
// <a.description>
<if(a.pred)>
if (<a.pred>)
{
<a.alt>
}<\n>
<else>
{
<a.alt>
}<\n>
<endif>
>>
/** For empty rewrites: "r : ... -> ;" */
rewriteEmptyAlt() ::= "root_0 = null;"
rewriteTree(root,children,description,enclosingTreeLevel,treeLevel) ::= <<
// <fileName>:<description>
{
<ASTLabelType> root_<treeLevel> = (<ASTLabelType>)adaptor.GetNilNode();
<root:rewriteElement()>
<children:rewriteElement()>
adaptor.AddChild(root_<enclosingTreeLevel>, root_<treeLevel>);
}<\n>
>>
rewriteElementList(elements) ::= "<elements:rewriteElement()>"
rewriteElement(e) ::= <<
<@pregen()>
<e.el>
>>
/** Gen ID or ID[args] */
rewriteTokenRef(token,elementIndex,hetero,args) ::= <<
adaptor.AddChild(root_<treeLevel>, <createRewriteNodeFromElement(...)>);<\n>
>>
/** Gen $label ... where defined via label=ID */
rewriteTokenLabelRef(label,elementIndex) ::= <<
adaptor.AddChild(root_<treeLevel>, stream_<label>.NextNode());<\n>
>>
/** Gen $label ... where defined via label+=ID */
rewriteTokenListLabelRef(label,elementIndex) ::= <<
adaptor.AddChild(root_<treeLevel>, stream_<label>.NextNode());<\n>
>>
/** Gen ^($label ...) */
rewriteTokenLabelRefRoot(label,elementIndex) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(stream_<label>.NextNode(), root_<treeLevel>);<\n>
>>
/** Gen ^($label ...) where label+=... */
rewriteTokenListLabelRefRoot ::= rewriteTokenLabelRefRoot
/** Gen ^(ID ...) or ^(ID[args] ...) */
rewriteTokenRefRoot(token,elementIndex,hetero,args) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(<createRewriteNodeFromElement(...)>, root_<treeLevel>);<\n>
>>
rewriteImaginaryTokenRef(args,token,hetero,elementIndex) ::= <<
adaptor.AddChild(root_<treeLevel>, <createImaginaryNode(tokenType=token, ...)>);<\n>
>>
rewriteImaginaryTokenRefRoot(args,token,hetero,elementIndex) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(<createImaginaryNode(tokenType=token, ...)>, root_<treeLevel>);<\n>
>>
/** plain -> {foo} action */
rewriteAction(action) ::= <<
root_0 = <action>;<\n>
>>
/** What is the name of the previous value of this rule's root tree? This
 * lets us refer to $rule to mean the previous value. I am reusing the
 * variable 'tree' sitting in the retval struct to hold the value of root_0 right
 * before I set it during rewrites. The assignment will be to retval.Tree.
 */
prevRuleRootRef() ::= "retval"
rewriteRuleRef(rule) ::= <<
adaptor.AddChild(root_<treeLevel>, stream_<rule>.NextTree());<\n>
>>
rewriteRuleRefRoot(rule) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(stream_<rule>.NextNode(), root_<treeLevel>);<\n>
>>
rewriteNodeAction(action) ::= <<
adaptor.AddChild(root_<treeLevel>, <action>);<\n>
>>
rewriteNodeActionRoot(action) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(<action>, root_<treeLevel>);<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel=rule */
rewriteRuleLabelRef(label) ::= <<
adaptor.AddChild(root_<treeLevel>, stream_<label>.NextTree());<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel+=rule */
rewriteRuleListLabelRef(label) ::= <<
adaptor.AddChild(root_<treeLevel>, stream_<label>.NextTree());<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel=rule */
rewriteRuleLabelRefRoot(label) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(stream_<label>.NextNode(), root_<treeLevel>);<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel+=rule */
rewriteRuleListLabelRefRoot(label) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(stream_<label>.NextNode(), root_<treeLevel>);<\n>
>>
createImaginaryNode(tokenType,hetero,args) ::= <<
<if(hetero)>
<! new MethodNode(IDLabel, args) !>
new <hetero>(<tokenType><if(args)>, <args; separator=", "><endif>)
<else>
(<ASTLabelType>)adaptor.Create(<tokenType>, <args; separator=", "><if(!args)>"<tokenType>"<endif>)
<endif>
>>
createRewriteNodeFromElement(token,hetero,args) ::= <<
<if(hetero)>
new <hetero>(stream_<token>.NextToken()<if(args)>, <args; separator=", "><endif>)
<else>
<if(args)> <! must create new node from old !>
adaptor.Create(<token>, <args; separator=", ">)
<else>
stream_<token>.NextNode()
<endif>
<endif>
>>
/*
[The "BSD licence"]
Copyright (c) 2007-2008 Johannes Luber
Copyright (c) 2005-2007 Kunle Odutola
Copyright (c) 2005 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Template overrides to add debugging to AST stuff. The dynamic inheritance
 * hierarchy is set up as ASTDbg : AST : Dbg : C# by the code generator.
 */
group ASTDbg;
parserMembers() ::= <<
protected DebugTreeAdaptor adaptor;
public ITreeAdaptor TreeAdaptor
{
get {
<if(grammar.grammarIsRoot)>
return this.adaptor;
<else>
this.adaptor = (DebugTreeAdaptor)adaptor; // delegator sends dbg adaptor
<endif><\n>
<grammar.directDelegates:{g|<g:delegateName()>.TreeAdaptor = this.adaptor;}>
}
set { this.adaptor = new DebugTreeAdaptor(dbg, value); }
}<\n>
>>
parserCtorBody() ::= <<
<super.parserCtorBody()>
>>
createListenerAndHandshake() ::= <<
DebugEventSocketProxy proxy = new DebugEventSocketProxy(this, port, adaptor);
DebugListener = proxy;
<!
Original line follows, replaced by the next two ifs:
set<inputStreamType>(new Debug<inputStreamType>(input,proxy));
!>
<if(PARSER)>
TokenStream = new DebugTokenStream(input,proxy);<\n>
<endif>
<if(TREE_PARSER)>
TokenStream = new DebugTreeNodeStream(input,proxy);<\n>
<endif>
try {
proxy.Handshake();
} catch (IOException ioe) {
ReportError(ioe);
}
>>
@ctorForRootGrammar.finally() ::= <<
ITreeAdaptor adap = new CommonTreeAdaptor();
TreeAdaptor = adap;
proxy.TreeAdaptor = adap;
>>
@ctorForProfilingRootGrammar.finally() ::=<<
ITreeAdaptor adap = new CommonTreeAdaptor();
TreeAdaptor = adap;
proxy.TreeAdaptor = adap;
>>
@ctorForPredefinedListener.superClassRef() ::= "base(input, dbg)"
@ctorForPredefinedListener.finally() ::=<<
<if(grammar.grammarIsRoot)> <! don't create new adaptor for delegates !>
ITreeAdaptor adap = new CommonTreeAdaptor();
TreeAdaptor = adap;<\n>
<endif>
>>
@rewriteElement.pregen() ::= "dbg.Location(<e.line>,<e.pos>);"

/*
[The "BSD licence"]
Copyright (c) 2007-2008 Johannes Luber
Copyright (c) 2005-2007 Kunle Odutola
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Templates for building ASTs during normal parsing.
*
* Deal with many combinations. Dimensions are:
* Auto build or rewrite
* no label, label, list label (label/no-label handled together)
* child, root
* token, set, rule, wildcard
*
* The situation is not too bad as rewrite (->) usage makes ^ and !
* invalid. There is no huge explosion of combinations.
*/
group ASTParser;
@rule.setErrorReturnValue() ::= <<
// Conversion of the second argument is necessary, but harmless
retval.Tree = (<ASTLabelType>)adaptor.ErrorNode(input, (IToken) retval.Start, input.LT(-1), re);
<! System.Console.WriteLine("<ruleName> returns " + ((CommonTree)retval.Tree).ToStringTree()); !>
>>
// TOKEN AST STUFF
/** ID and output=AST */
tokenRef(token,label,elementIndex,hetero) ::= <<
<super.tokenRef(...)>
<if(backtracking)>
if ( state.backtracking == 0 )
{
<endif>
<label>_tree = <createNodeFromToken(...)>;
adaptor.AddChild(root_0, <label>_tree);
<if(backtracking)>
}
<endif>
>>
/** ID! and output=AST (same as plain tokenRef) */
tokenRefBang(token,label,elementIndex) ::= "<super.tokenRef(...)>"
/** ID^ and output=AST */
tokenRefRuleRoot(token,label,elementIndex,hetero) ::= <<
<super.tokenRef(...)>
<if(backtracking)>
if ( state.backtracking == 0 )
{
<endif>
<label>_tree = <createNodeFromToken(...)>;
root_0 = (<ASTLabelType>)adaptor.BecomeRoot(<label>_tree, root_0);
<if(backtracking)>
}
<endif>
>>
/** ids+=ID! and output=AST */
tokenRefBangAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)>
<listLabel(elem=label,...)>
>>
/** label+=TOKEN when output=AST but not rewrite alt */
tokenRefAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** Match label+=TOKEN^ when output=AST but not rewrite alt */
tokenRefRuleRootAndListLabel(token,label,hetero,elementIndex) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// SET AST
// The match set stuff is interesting in that it uses an argument list
// to pass code to the default matchSet; another possible way to alter
// inherited code. I don't use the region stuff because I need to pass
// different chunks depending on the operator. I don't like making
// the template name include the operator, as the number of templates gets
// large, but this is the most flexible approach--as opposed to having
// the code generator call matchSet and then add root code or rule-root code
// plus list-label code plus ... The combinations might require complicated
// rather than just added-on code. Investigate that refactoring when
// I have more time.
matchSet(s,label,hetero,elementIndex,postmatchCode) ::= <<
<super.matchSet(..., postmatchCode={<if(backtracking)>if ( state.backtracking == 0 ) <endif>adaptor.AddChild(root_0, <createNodeFromToken(...)>);})>
>>
matchRuleBlockSet(s,label,hetero,elementIndex,postmatchCode,treeLevel="0") ::= <<
<matchSet(...)>
>>
matchSetBang(s,label,elementIndex,postmatchCode) ::= "<super.matchSet(...)>"
// note there is no matchSetTrack because -> rewrites force sets to be
// plain old blocks of alts: (A|B|...|C)
matchSetRuleRoot(s,label,hetero,elementIndex,debug) ::= <<
<if(label)>
<label>=(<labelType>)input.LT(1);<\n>
<endif>
<super.matchSet(..., postmatchCode={<if(backtracking)>if ( state.backtracking == 0 ) <endif>root_0 = (<ASTLabelType>)adaptor.BecomeRoot(<createNodeFromToken(...)>, root_0);})>
>>
// RULE REF AST
/** rule when output=AST */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking == 0 ) <endif>adaptor.AddChild(root_0, <label>.Tree);
>>
/** rule! is same as normal rule ref */
ruleRefBang(rule,label,elementIndex,args,scope) ::= "<super.ruleRef(...)>"
/** rule^ */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking == 0 ) <endif>root_0 = (<ASTLabelType>)adaptor.BecomeRoot(<label>.Tree, root_0);
>>
/** x+=rule when output=AST */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".Tree",...)>
>>
/** x+=rule! when output=AST is a rule ref with list addition */
ruleRefBangAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefBang(...)>
<listLabel(elem=label+".Tree",...)>
>>
/** x+=rule^ */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<listLabel(elem=label+".Tree",...)>
>>
// WILDCARD AST
wildcard(label,elementIndex) ::= <<
<super.wildcard(...)>
<if(backtracking)>
if ( state.backtracking == 0 )
{
<endif>
<label>_tree = (<ASTLabelType>)adaptor.Create(<label>);
adaptor.AddChild(root_0, <label>_tree);
<if(backtracking)>
}
<endif>
>>
wildcardBang(label,elementIndex) ::= "<super.wildcard(...)>"
wildcardRuleRoot(label,elementIndex) ::= <<
<super.wildcard(...)>
<if(backtracking)>
if ( state.backtracking == 0 )
{
<endif>
<label>_tree = (<ASTLabelType>)adaptor.Create(<label>);
root_0 = (<ASTLabelType>)adaptor.BecomeRoot(<label>_tree, root_0);
<if(backtracking)>
}
<endif>
>>
createNodeFromToken(label,hetero) ::= <<
<if(hetero)>
new <hetero>(<label>) <! new MethodNode(IDLabel) !>
<else>
(<ASTLabelType>)adaptor.Create(<label>)
<endif>
>>
ruleCleanUp() ::= <<
<super.ruleCleanUp()>
<if(backtracking)>
if ( state.backtracking==0 )
{
<endif>
retval.Tree = (<ASTLabelType>)adaptor.RulePostProcessing(root_0);
<if(!TREE_PARSER)>
adaptor.SetTokenBoundaries(retval.Tree, (IToken) retval.Start, (IToken) retval.Stop);
<endif>
<if(backtracking)>
}
<endif>
>>

/*
[The "BSD licence"]
Copyright (c) 2007-2008 Johannes Luber
Copyright (c) 2005-2007 Kunle Odutola
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Templates for building ASTs during tree parsing.
*
* Deal with many combinations. Dimensions are:
* Auto build or rewrite
* no label, label, list label (label/no-label handled together)
* child, root
* token, set, rule, wildcard
*
* Each combination has its own template except that label/no label
* is combined into tokenRef, ruleRef, ...
*/
group ASTTreeParser;
/** Add a variable to track last element matched */
ruleDeclarations() ::= <<
<super.ruleDeclarations()>
<ASTLabelType> _first_0 = null;
<ASTLabelType> _last = null;<\n>
>>
/** What to emit when there is no rewrite rule. For auto build
* mode, does nothing.
*/
noRewrite(rewriteBlockLevel, treeLevel) ::= <<
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(rewriteMode)>
retval.Tree = (<ASTLabelType>)_first_0;
if ( adaptor.GetParent(retval.Tree)!=null && adaptor.IsNil( adaptor.GetParent(retval.Tree) ) )
retval.Tree = (<ASTLabelType>)adaptor.GetParent(retval.Tree);
<endif>
<if(backtracking)>}<endif>
>>
/** match ^(root children) in tree parser; override here to
* add tree construction actions.
*/
tree(root, actionsAfterRoot, children, nullableChildList,
enclosingTreeLevel, treeLevel) ::= <<
_last = (<ASTLabelType>)input.LT(1);
{
<ASTLabelType> _save_last_<treeLevel> = _last;
<ASTLabelType> _first_<treeLevel> = null;
<if(!rewriteMode)>
<ASTLabelType> root_<treeLevel> = (<ASTLabelType>)adaptor.GetNilNode();
<endif>
<root:element()>
<if(rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 )<endif>
<if(root.el.rule)>
if ( _first_<enclosingTreeLevel>==null ) _first_<enclosingTreeLevel> = <root.el.label>.Tree;
<else>
if ( _first_<enclosingTreeLevel>==null ) _first_<enclosingTreeLevel> = <root.el.label>;
<endif>
<endif>
<actionsAfterRoot:element()>
<if(nullableChildList)>
if ( input.LA(1) == Token.DOWN )
{
Match(input, Token.DOWN, null); <checkRuleBacktrackFailure()>
<children:element()>
Match(input, Token.UP, null); <checkRuleBacktrackFailure()>
}
<else>
Match(input, Token.DOWN, null); <checkRuleBacktrackFailure()>
<children:element()>
Match(input, Token.UP, null); <checkRuleBacktrackFailure()>
<endif>
<if(!rewriteMode)>
adaptor.AddChild(root_<enclosingTreeLevel>, root_<treeLevel>);
<endif>
_last = _save_last_<treeLevel>;
}<\n>
>>
// TOKEN AST STUFF
/** ID! and output=AST (same as plain tokenRef) except that we also
 * set _last
 */
tokenRefBang(token,label,elementIndex) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.tokenRef(...)>
>>
/** ID auto construct */
tokenRef(token,label,elementIndex,hetero) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.tokenRef(...)>
<if(!rewriteMode)>
<if(backtracking)>
if ( state.backtracking == 0 )
{
<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = (<ASTLabelType>)adaptor.DupNode(<label>);
<endif><\n>
adaptor.AddChild(root_<treeLevel>, <label>_tree);
<if(backtracking)>
}
<endif>
<else> <! rewrite mode !>
<if(backtracking)>if ( state.backtracking==0 )<endif>
if ( _first_<treeLevel>==null ) _first_<treeLevel> = <label>;
<endif>
>>
/** label+=TOKEN auto construct */
tokenRefAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) auto construct */
tokenRefRuleRoot(token,label,elementIndex,hetero) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.tokenRef(...)>
<if(!rewriteMode)>
<if(backtracking)>
if ( state.backtracking == 0 )
{
<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = (<ASTLabelType>)adaptor.DupNode(<label>);
<endif><\n>
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(<label>_tree, root_<treeLevel>);
<if(backtracking)>
}
<endif>
<endif>
>>
/** Match ^(label+=TOKEN ...) auto construct */
tokenRefRuleRootAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// SET AST
matchSet(s,label,hetero,elementIndex,postmatchCode) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.matchSet(..., postmatchCode={
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = (<ASTLabelType>)adaptor.DupNode(<label>);
<endif><\n>
adaptor.AddChild(root_<treeLevel>, <label>_tree);
<if(backtracking)>}<endif>
<endif>
}
)>
>>
matchRuleBlockSet(s,label,hetero,elementIndex,postmatchCode,treeLevel="0") ::= <<
<matchSet(...)>
<noRewrite()> <! set return tree !>
>>
matchSetBang(s,label,elementIndex,postmatchCode) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.matchSet(...)>
>>
matchSetRuleRoot(s,label,hetero,elementIndex,debug) ::= <<
<super.matchSet(..., postmatchCode={
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = (<ASTLabelType>)adaptor.DupNode(<label>);
<endif><\n>
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(<label>_tree, root_<treeLevel>);
<if(backtracking)>}<endif>
<endif>
}
)>
>>
// RULE REF AST
/** rule auto construct */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>
<if(!rewriteMode)>
adaptor.AddChild(root_<treeLevel>, <label>.Tree);
<else> <! rewrite mode !>
if ( _first_<treeLevel>==null ) _first_<treeLevel> = <label>.Tree;
<endif>
>>
/** x+=rule auto construct */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".Tree",...)>
>>
/** ^(rule ...) auto construct */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRef(...)>
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking == 0 ) <endif>root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(<label>.Tree, root_<treeLevel>);
<endif>
>>
/** ^(x+=rule ...) auto construct */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<listLabel(elem=label+".Tree",...)>
>>
/** rule when output=AST and tracking for rewrite */
ruleRefTrack(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRefTrack(...)>
>>
/** x+=rule when output=AST and tracking for rewrite */
ruleRefTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRefTrackAndListLabel(...)>
>>
/** ^(rule ...) rewrite */
ruleRefRuleRootTrack(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRefRuleRootTrack(...)>
>>
/** ^(x+=rule ...) rewrite */
ruleRefRuleRootTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRefRuleRootTrackAndListLabel(...)>
>>
/** Streams for token refs are tree nodes now; override to
* change nextToken to nextNode.
*/
createRewriteNodeFromElement(token,hetero,scope) ::= <<
<if(hetero)>
new <hetero>(stream_<token>.NextNode())
<else>
stream_<token>.NextNode()
<endif>
>>
ruleCleanUp() ::= <<
<super.ruleCleanUp()>
<if(!rewriteMode)>
<if(backtracking)>
if ( state.backtracking==0 )
{
<endif>
retval.Tree = (<ASTLabelType>)adaptor.RulePostProcessing(root_0);
<if(backtracking)>
}
<endif>
<endif>
>>

/*
[The "BSD licence"]
Copyright (c) 2007-2008 Johannes Luber
Copyright (c) 2005-2007 Kunle Odutola
Copyright (c) 2005 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Template overrides to add debugging to normal C# output;
* If ASTs are built, then you'll also get ASTDbg.stg loaded.
*/
group Dbg;
@outputFile.imports() ::= <<
<@super.imports()>
using Antlr.Runtime.Debug;
using IOException = System.IO.IOException;
>>
@genericParser.members() ::= <<
<if(grammar.grammarIsRoot)>
public static readonly string[] ruleNames = new string[] {
"invalidRule", <grammar.allImportedRules:{rST | "<rST.name>"}; wrap="\n ", separator=", ">
};<\n>
<endif>
<if(grammar.grammarIsRoot)> <! grammar imports other grammar(s) !>
private int ruleLevel = 0;
public int RuleLevel {
get { return ruleLevel; }
}
public void IncRuleLevel() { ruleLevel++; }
public void DecRuleLevel() { ruleLevel--; }
<if(profile)>
<ctorForProfilingRootGrammar()>
<else>
<ctorForRootGrammar()>
<endif>
<ctorForPredefinedListener()>
<else> <! imported grammar !>
public int RuleLevel {
get { return <grammar.delegators:{g| <g:delegateName()>}>.RuleLevel; }
}
public void IncRuleLevel() { <grammar.delegators:{g| <g:delegateName()>}>.IncRuleLevel(); }
public void DecRuleLevel() { <grammar.delegators:{g| <g:delegateName()>}>.DecRuleLevel(); }
<ctorForDelegateGrammar()>
<endif>
<if(profile)>
override public bool AlreadyParsedRule(IIntStream input, int ruleIndex)
{
((Profiler)dbg).ExamineRuleMemoization(input, ruleIndex, <grammar.composite.rootGrammar.recognizerName>.ruleNames[ruleIndex]);
return base.AlreadyParsedRule(input, ruleIndex);
}<\n>
override public void Memoize(IIntStream input,
int ruleIndex,
int ruleStartIndex)
{
((Profiler)dbg).Memoize(input, ruleIndex, ruleStartIndex, <grammar.composite.rootGrammar.recognizerName>.ruleNames[ruleIndex]);
base.Memoize(input, ruleIndex, ruleStartIndex);
}<\n>
<endif>
protected bool EvalPredicate(bool result, string predicate)
{
dbg.SemanticPredicate(result, predicate);
return result;
}<\n>
>>
ctorForRootGrammar() ::= <<
<! bug: can't use <@super.members()>; cut-n-paste instead !>
<! Same except we add port number and profile stuff if root grammar !>
public <name>(<inputStreamType> input)
: this(input, DebugEventSocketProxy.DEFAULT_DEBUGGER_PORT, new RecognizerSharedState()) {
}
public <name>(<inputStreamType> input, int port, RecognizerSharedState state)
: base(input, state) {
<parserCtorBody()>
<createListenerAndHandshake()>
<grammar.directDelegates:{g|<g:delegateName()> = new <g.recognizerName>(input, dbg, this.state, this<grammar.delegators:{g|, <g:delegateName()>}>);}; separator="\n">
<@finally()>
}<\n>
>>
ctorForProfilingRootGrammar() ::= <<
<! bug: can't use <@super.members()>; cut-n-paste instead !>
public <name>(<inputStreamType> input)
: this(input, new Profiler(null), new RecognizerSharedState()) {
}
public <name>(<inputStreamType> input, IDebugEventListener dbg, RecognizerSharedState state)
: base(input, dbg, state) {
Profiler p = (Profiler)dbg;
p.setParser(this);
<parserCtorBody()>
<grammar.directDelegates:
{g|<g:delegateName()> = new <g.recognizerName>(input, dbg, this.state, this<grammar.delegators:{g|, <g:delegateName()>}>);}; separator="\n">
<@finally()>
}
<\n>
>>
/** Basically we don't want to set any dbg listeners, as the root will have it. */
ctorForDelegateGrammar() ::= <<
public <name>(<inputStreamType> input, IDebugEventListener dbg, RecognizerSharedState state<grammar.delegators:{g|, <g.recognizerName> <g:delegateName()>}>)
: base(input, dbg, state) {
<parserCtorBody()>
<grammar.directDelegates:
{g|<g:delegateName()> = new <g.recognizerName>(input, this, this.state<grammar.delegators:{g|, <g:delegateName()>}>);}; separator="\n">
}<\n>
>>
ctorForPredefinedListener() ::= <<
public <name>(<inputStreamType> input, IDebugEventListener dbg)
: <@superClassRef>base(input, dbg, new RecognizerSharedState())<@end> {
<if(profile)>
Profiler p = (Profiler)dbg;
p.setParser(this);
<endif>
<parserCtorBody()>
<grammar.directDelegates:{g|<g:delegateName()> = new <g.recognizerName>(input, dbg, this.state, this<grammar.delegators:{g|, <g:delegateName()>}>);}; separator="\n">
<@finally()>
}<\n>
>>
createListenerAndHandshake() ::= <<
<if(TREE_PARSER)>
DebugEventSocketProxy proxy = new DebugEventSocketProxy(this, port, input.TreeAdaptor);
<else>
DebugEventSocketProxy proxy = new DebugEventSocketProxy(this, port, null);
<endif>
DebugListener = proxy;
try
{
proxy.Handshake();
}
catch (IOException ioe)
{
ReportError(ioe);
}
>>
@genericParser.superClassName() ::= "Debug<@super.superClassName()>"
@rule.preamble() ::= <<
try {
dbg.EnterRule(GrammarFileName, "<ruleName>");
if ( RuleLevel==0 ) {dbg.Commence();}
IncRuleLevel();
dbg.Location(<ruleDescriptor.tree.line>, <ruleDescriptor.tree.column>);<\n>
>>
@lexer.debugAddition() ::= ", dbg"
@rule.postamble() ::= <<
dbg.Location(<ruleDescriptor.EORNode.line>, <ruleDescriptor.EORNode.column>);<\n>
}
finally {
dbg.ExitRule(GrammarFileName, "<ruleName>");
DecRuleLevel();
if ( RuleLevel==0 ) {dbg.Terminate();}
}<\n>
>>
@synpred.start() ::= "dbg.BeginBacktrack(state.backtracking);"
@synpred.stop() ::= "dbg.EndBacktrack(state.backtracking, success);"
// Common debug event triggers used by region overrides below
enterSubRule() ::=
"try { dbg.EnterSubRule(<decisionNumber>);<\n>"
exitSubRule() ::=
"} finally { dbg.ExitSubRule(<decisionNumber>); }<\n>"
enterDecision() ::=
"try { dbg.EnterDecision(<decisionNumber>);<\n>"
exitDecision() ::=
"} finally { dbg.ExitDecision(<decisionNumber>); }<\n>"
enterAlt(n) ::= "dbg.EnterAlt(<n>);<\n>"
// Region overrides that tell various constructs to add debugging triggers
@block.predecision() ::= "<enterSubRule()><enterDecision()>"
@block.postdecision() ::= "<exitDecision()>"
@block.postbranch() ::= "<exitSubRule()>"
@ruleBlock.predecision() ::= "<enterDecision()>"
@ruleBlock.postdecision() ::= "<exitDecision()>"
@ruleBlockSingleAlt.prealt() ::= "<enterAlt(n=\"1\")>"
@blockSingleAlt.prealt() ::= "<enterAlt(n=\"1\")>"
@positiveClosureBlock.preloop() ::= "<enterSubRule()>"
@positiveClosureBlock.postloop() ::= "<exitSubRule()>"
@positiveClosureBlock.predecision() ::= "<enterDecision()>"
@positiveClosureBlock.postdecision() ::= "<exitDecision()>"
@positiveClosureBlock.earlyExitException() ::=
"dbg.RecognitionException(eee);<\n>"
@closureBlock.preloop() ::= "<enterSubRule()>"
@closureBlock.postloop() ::= "<exitSubRule()>"
@closureBlock.predecision() ::= "<enterDecision()>"
@closureBlock.postdecision() ::= "<exitDecision()>"
@altSwitchCase.prealt() ::= "<enterAlt(n=i)>"
@element.prematch() ::=
"dbg.Location(<it.line>,<it.pos>);"
@matchSet.mismatchedSetException() ::=
"dbg.RecognitionException(mse);"
@dfaState.noViableAltException() ::= "dbg.RecognitionException(nvae_d<decisionNumber>s<stateNumber>);"
@dfaStateSwitch.noViableAltException() ::= "dbg.RecognitionException(nvae_d<decisionNumber>s<stateNumber>);"
dfaDecision(decisionNumber,description) ::= <<
try
{
isCyclicDecision = true;
<super.dfaDecision(...)>
}
catch (NoViableAltException nvae)
{
dbg.RecognitionException(nvae);
throw nvae;
}
>>
@cyclicDFA.dbgCtor() ::= <<
public DFA<dfa.decisionNumber>(BaseRecognizer recognizer, IDebugEventListener dbg) : this(recognizer)
{
this.dbg = dbg;
}
>>
@cyclicDFA.debugMember() ::= <<
IDebugEventListener dbg;
>>
@cyclicDFA.errorMethod() ::= <<
public override void Error(NoViableAltException nvae)
{
dbg.RecognitionException(nvae);
}
>>
/** Force predicate validation to trigger an event */
evalPredicate(pred,description) ::= <<
EvalPredicate(<pred>,"<description>")
>>
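// As a hedged sketch (grammar fragment and token names hypothetical), the
// evalPredicate override above routes every semantic predicate through the
// EvalPredicate() helper defined in @genericParser.members(), so each
// predicate is both evaluated and reported to the debug listener:

```
// Hypothetical grammar fragment:
//   expr : {input.LT(1).Type == INT}? intExpr | idExpr ;
// With this Dbg subgroup active, the predicate would be emitted roughly as:
//   if ( EvalPredicate(input.LT(1).Type == INT, "input.LT(1).Type == INT") ) { ... }
// EvalPredicate() returns the result unchanged after calling
// dbg.SemanticPredicate(result, predicate).
```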

/*
[The "BSD licence"]
Copyright (c) 2007-2008 Johannes Luber
Copyright (c) 2005-2007 Kunle Odutola
Copyright (c) 2005 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Template subgroup to add template rewrite output
* If debugging, then you'll also get STDbg.stg loaded.
*/
group ST;
@outputFile.imports() ::= <<
<@super.imports()>
using Antlr.StringTemplate;
using Antlr.StringTemplate.Language;
<if(!backtracking)>
using Hashtable = System.Collections.Hashtable;
<endif>
>>
/** Add this to each rule's return value struct */
@returnScope.ruleReturnMembers() ::= <<
private StringTemplate st;
public StringTemplate ST { get { return st; } set { st = value; } }
public override object Template { get { return st; } }
public override string ToString() { return (st == null) ? null : st.ToString(); }
>>
@genericParser.members() ::= <<
<@super.members()>
protected StringTemplateGroup templateLib =
new StringTemplateGroup("<name>Templates", typeof(AngleBracketTemplateLexer));
public StringTemplateGroup TemplateLib
{
get { return this.templateLib; }
set { this.templateLib = value; }
}
/// \<summary> Allows convenient multi-value initialization:
/// "new STAttrMap().Add(...).Add(...)"
/// \</summary>
protected class STAttrMap : Hashtable
{
public STAttrMap Add(string attrName, object value)
{
base.Add(attrName, value);
return this;
}
public STAttrMap Add(string attrName, int value)
{
base.Add(attrName, value);
return this;
}
}
>>
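// A minimal usage sketch of the STAttrMap helper above (attribute names and
// token labels hypothetical): both Add() overloads return this, so attribute
// maps can be built inline inside a generated template invocation:

```
// Hypothetical generated code:
//   retval.ST = templateLib.GetInstanceOf("decl",
//                   new STAttrMap().Add("name", ID1.Text)
//                                  .Add("line", 4));
```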
/** x+=rule when output=template */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".Template",...)>
>>
rewriteTemplate(alts) ::= <<
// TEMPLATE REWRITE
<if(backtracking)>
if ( state.backtracking==0 )
{
<alts:rewriteTemplateAlt(); separator="else ">
<if(rewriteMode)><replaceTextInLine()><endif>
}
<else>
<alts:rewriteTemplateAlt(); separator="else ">
<if(rewriteMode)><replaceTextInLine()><endif>
<endif>
>>
replaceTextInLine() ::= <<
<if(TREE_PARSER)>
((TokenRewriteStream)input.TokenStream).Replace(
input.TreeAdaptor.GetTokenStartIndex(retval.Start),
input.TreeAdaptor.GetTokenStopIndex(retval.Start),
retval.ST);
<else>
((TokenRewriteStream)input).Replace(
((IToken)retval.Start).TokenIndex,
input.LT(-1).TokenIndex,
retval.ST);
<endif>
>>
rewriteTemplateAlt() ::= <<
// <it.description>
<if(it.pred)>
if (<it.pred>) {
retval.ST = <it.alt>;
}<\n>
<else>
{
retval.ST = <it.alt>;
}<\n>
<endif>
>>
rewriteEmptyTemplate(alts) ::= <<
null;
>>
/** Invoke a template with a set of attribute name/value pairs.
* Set the value of the rule's template *after* having set
* the attributes, because the rule's template might be used as
* an attribute to build a bigger template; otherwise you get a
* self-embedded template.
*/
rewriteExternalTemplate(name,args) ::= <<
templateLib.GetInstanceOf("<name>"<if(args)>,
new STAttrMap()<args:{a | .Add("<a.name>", <a.value>)}>
<endif>)
>>
/** expr is a string expression that says what template to load */
rewriteIndirectTemplate(expr,args) ::= <<
templateLib.GetInstanceOf(<expr><if(args)>,
new STAttrMap()<args:{a | .Add("<a.name>", <a.value>)}>
<endif>)
>>
/** Invoke an inline template with a set of attribute name/value pairs */
rewriteInlineTemplate(args, template) ::= <<
new StringTemplate(templateLib, "<template>"<if(args)>,
new STAttrMap()<args:{a | .Add("<a.name>", <a.value>)}>
<endif>)
>>
/** plain -> {foo} action */
rewriteAction(action) ::= <<
<action>
>>
/** An action has %st.attrName=expr; or %{st}.attrName=expr; */
actionSetAttribute(st,attrName,expr) ::= <<
(<st>).SetAttribute("<attrName>",<expr>);
>>
/** Translate %{stringExpr} */
actionStringConstructor(stringExpr) ::= <<
new StringTemplate(templateLib,<stringExpr>)
>>
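// As a hedged sketch of the action translations defined just above
// (attribute and variable names hypothetical):

```
// Hypothetical translations performed by actionSetAttribute() and
// actionStringConstructor():
//   %st.attr = expr;   becomes   (st).SetAttribute("attr", expr);
//   %{stringExpr}      becomes   new StringTemplate(templateLib, stringExpr)
```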

/*
[The "BSD licence"]
Copyright (c) 2007-2008 Johannes Luber
Copyright (c) 2005-2007 Kunle Odutola
Copyright (c) 2005 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
group AST;
@outputFile.imports() ::= <<
<@super.imports()>
<if(!TREE_PARSER)><! tree parser would already have imported !>
using Antlr.Runtime.Tree;<\n>
<endif>
>>
@genericParser.members() ::= <<
<@super.members()>
<parserMembers()>
>>
/** Add an adaptor property that knows how to build trees */
parserMembers() ::= <<
protected ITreeAdaptor adaptor = new CommonTreeAdaptor();<\n>
public ITreeAdaptor TreeAdaptor
{
get { return this.adaptor; }
set {
this.adaptor = value;
<grammar.directDelegates:{g|<g:delegateName()>.TreeAdaptor = this.adaptor;}>
}
}
>>
@returnScope.ruleReturnMembers() ::= <<
private <ASTLabelType> tree;
override public object Tree
{
get { return tree; }
set { tree = (<ASTLabelType>) value; }
}
>>
/** Add a variable to track rule's return AST */
ruleDeclarations() ::= <<
<super.ruleDeclarations()>
<ASTLabelType> root_0 = null;<\n>
>>
ruleLabelDefs() ::= <<
<super.ruleLabelDefs()>
<ruleDescriptor.tokenLabels:{<ASTLabelType> <it.label.text>_tree=null;}; separator="\n">
<ruleDescriptor.tokenListLabels:{<ASTLabelType> <it.label.text>_tree=null;}; separator="\n">
<ruleDescriptor.allTokenRefsInAltsWithRewrites
:{RewriteRule<rewriteElementType>Stream stream_<it> = new RewriteRule<rewriteElementType>Stream(adaptor,"token <it>");}; separator="\n">
<ruleDescriptor.allRuleRefsInAltsWithRewrites
:{RewriteRuleSubtreeStream stream_<it> = new RewriteRuleSubtreeStream(adaptor,"rule <it>");}; separator="\n">
>>
/** When doing auto AST construction, we must define some variables.
* These should be turned off if doing rewrites. This must be a "mode",
* as a rule could have both rewrite and AST within the same alternative
* block.
*/
@alt.declarations() ::= <<
<if(autoAST)>
<if(outerAlt)>
<if(!rewriteMode)>
root_0 = (<ASTLabelType>)adaptor.GetNilNode();<\n>
<endif>
<endif>
<endif>
>>
// T r a c k i n g R u l e E l e m e n t s
/** ID and track it for use in a rewrite rule */
tokenRefTrack(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)> <! Track implies no auto AST construction!>
<if(backtracking)>if ( state.backtracking==0 ) <endif>stream_<token>.Add(<label>);<\n>
>>
/** ids+=ID and track it for use in a rewrite rule; adds to ids *and*
* to the tracking list stream_ID for use in the rewrite.
*/
tokenRefTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefTrack(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) track for rewrite */
tokenRefRuleRootTrack(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>stream_<token>.Add(<label>);<\n>
>>
/** Match ^(label+=TOKEN ...) track for rewrite */
tokenRefRuleRootTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefRuleRootTrack(...)>
<listLabel(elem=label,...)>
>>
wildcardTrack(label,elementIndex) ::= <<
<super.wildcard(...)>
>>
/** rule when output=AST and tracking for rewrite */
ruleRefTrack(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>stream_<rule.name>.Add(<label>.Tree);
>>
/** x+=rule when output=AST and tracking for rewrite */
ruleRefTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefTrack(...)>
<listLabel(elem=label+".Tree",...)>
>>
/** ^(rule ...) rewrite */
ruleRefRuleRootTrack(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>stream_<rule.name>.Add(<label>.Tree);
>>
/** ^(x+=rule ...) rewrite */
ruleRefRuleRootTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRootTrack(...)>
<listLabel(elem=label+".Tree",...)>
>>
// R e w r i t e
rewriteCode(
alts, description,
referencedElementsDeep, // ALL referenced elements to right of ->
referencedTokenLabels,
referencedTokenListLabels,
referencedRuleLabels,
referencedRuleListLabels,
rewriteBlockLevel, enclosingTreeLevel, treeLevel) ::=
<<
// AST REWRITE
// elements: <referencedElementsDeep; separator=", ">
// token labels: <referencedTokenLabels; separator=", ">
// rule labels: <referencedRuleLabels; separator=", ">
// token list labels: <referencedTokenListLabels; separator=", ">
// rule list labels: <referencedRuleListLabels; separator=", ">
<if(backtracking)>
if ( state.backtracking==0 ) {<\n>
<endif>
<prevRuleRootRef()>.Tree = root_0;
<rewriteCodeLabels()>
root_0 = (<ASTLabelType>)adaptor.GetNilNode();
<alts:rewriteAlt(); separator="else ">
<! if tree parser and rewrite=true !>
<if(TREE_PARSER)>
<if(rewriteMode)>
<prevRuleRootRef()>.Tree = (<ASTLabelType>)adaptor.RulePostProcessing(root_0);
input.ReplaceChildren(adaptor.GetParent(retval.Start),
adaptor.GetChildIndex(retval.Start),
adaptor.GetChildIndex(_last),
retval.Tree);
<endif>
<endif>
<! if parser or rewrite!=true, we need to set result !>
<if(!TREE_PARSER)>
<prevRuleRootRef()>.Tree = root_0;
<endif>
<if(!rewriteMode)>
<prevRuleRootRef()>.Tree = root_0;
<endif>
<if(backtracking)>
}
<endif>
>>
rewriteCodeLabels() ::= <<
<referencedTokenLabels
:{RewriteRule<rewriteElementType>Stream stream_<it> = new RewriteRule<rewriteElementType>Stream(adaptor, "token <it>", <it>);};
separator="\n"
>
<referencedTokenListLabels
:{RewriteRule<rewriteElementType>Stream stream_<it> = new RewriteRule<rewriteElementType>Stream(adaptor,"token <it>", list_<it>);};
separator="\n"
>
<referencedRuleLabels
:{RewriteRuleSubtreeStream stream_<it> = new RewriteRuleSubtreeStream(adaptor, "token <it>", (<it>!=null ? <it>.Tree : null));};
separator="\n"
>
<referencedRuleListLabels
:{RewriteRuleSubtreeStream stream_<it> = new RewriteRuleSubtreeStream(adaptor, "token <it>", list_<it>);};
separator="\n"
>
>>
/** Generate code for an optional rewrite block; note that it uses the deep
* referenced-element list rather than the shallow list used by other blocks.
*/
rewriteOptionalBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in immediate block; no nested blocks
description) ::=
<<
// <fileName>:<description>
if ( <referencedElementsDeep:{el | stream_<el>.HasNext()}; separator=" || "> )
{
<alt>
}
<referencedElementsDeep:{el | stream_<el>.Reset();<\n>}>
>>
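/* Illustrative example (assumed grammar fragment, not generated from this file):
 * for a rewrite such as "decl : type ID init? -> ^(DECL type ID init?) ;",
 * the optional "init?" subtree goes through rewriteOptionalBlock, which
 * guards the alt with the deep-referenced streams and then resets them,
 * roughly:
 *
 *   if ( stream_init.HasNext() )
 *   {
 *       adaptor.AddChild(root_1, stream_init.NextTree());
 *   }
 *   stream_init.Reset();
 */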
rewriteClosureBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in immediate block; no nested blocks
description) ::=
<<
// <fileName>:<description>
while ( <referencedElements:{el | stream_<el>.HasNext()}; separator=" || "> )
{
<alt>
}
<referencedElements:{el | stream_<el>.Reset();<\n>}>
>>
rewritePositiveClosureBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in immediate block; no nested blocks
description) ::=
<<
if ( !(<referencedElements:{el | stream_<el>.HasNext()}; separator=" || ">) ) {
throw new RewriteEarlyExitException();
}
while ( <referencedElements:{el | stream_<el>.HasNext()}; separator=" || "> )
{
<alt>
}
<referencedElements:{el | stream_<el>.Reset();<\n>}>
>>
rewriteAlt(a) ::= <<
// <a.description>
<if(a.pred)>
if (<a.pred>)
{
<a.alt>
}<\n>
<else>
{
<a.alt>
}<\n>
<endif>
>>
/** For empty rewrites: "r : ... -> ;" */
rewriteEmptyAlt() ::= "root_0 = null;"
rewriteTree(root,children,description,enclosingTreeLevel,treeLevel) ::= <<
// <fileName>:<description>
{
<ASTLabelType> root_<treeLevel> = (<ASTLabelType>)adaptor.GetNilNode();
<root:rewriteElement()>
<children:rewriteElement()>
adaptor.AddChild(root_<enclosingTreeLevel>, root_<treeLevel>);
}<\n>
>>
rewriteElementList(elements) ::= "<elements:rewriteElement()>"
rewriteElement(e) ::= <<
<@pregen()>
<e.el>
>>
/** Gen ID or ID[args] */
rewriteTokenRef(token,elementIndex,hetero,args) ::= <<
adaptor.AddChild(root_<treeLevel>, <createRewriteNodeFromElement(...)>);<\n>
>>
/** Gen $label ... where defined via label=ID */
rewriteTokenLabelRef(label,elementIndex) ::= <<
adaptor.AddChild(root_<treeLevel>, stream_<label>.NextNode());<\n>
>>
/** Gen $label ... where defined via label+=ID */
rewriteTokenListLabelRef(label,elementIndex) ::= <<
adaptor.AddChild(root_<treeLevel>, stream_<label>.NextNode());<\n>
>>
/** Gen ^($label ...) */
rewriteTokenLabelRefRoot(label,elementIndex) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(stream_<label>.NextNode(), root_<treeLevel>);<\n>
>>
/** Gen ^($label ...) where label+=... */
rewriteTokenListLabelRefRoot ::= rewriteTokenLabelRefRoot
/** Gen ^(ID ...) or ^(ID[args] ...) */
rewriteTokenRefRoot(token,elementIndex,hetero,args) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(<createRewriteNodeFromElement(...)>, root_<treeLevel>);<\n>
>>
rewriteImaginaryTokenRef(args,token,hetero,elementIndex) ::= <<
adaptor.AddChild(root_<treeLevel>, <createImaginaryNode(tokenType=token, ...)>);<\n>
>>
rewriteImaginaryTokenRefRoot(args,token,hetero,elementIndex) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(<createImaginaryNode(tokenType=token, ...)>, root_<treeLevel>);<\n>
>>
/** plain -> {foo} action */
rewriteAction(action) ::= <<
root_0 = <action>;<\n>
>>
/** What is the name of the previous value of this rule's root tree? This
 * lets us refer to $rule to mean the previous value. I am reusing the
 * variable 'tree' sitting in the retval struct to hold the value of root_0 right
 * before I set it during rewrites. The assignment will be to retval.Tree.
 */
prevRuleRootRef() ::= "retval"
rewriteRuleRef(rule) ::= <<
adaptor.AddChild(root_<treeLevel>, stream_<rule>.NextTree());<\n>
>>
rewriteRuleRefRoot(rule) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(stream_<rule>.NextNode(), root_<treeLevel>);<\n>
>>
rewriteNodeAction(action) ::= <<
adaptor.AddChild(root_<treeLevel>, <action>);<\n>
>>
rewriteNodeActionRoot(action) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(<action>, root_<treeLevel>);<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel=rule */
rewriteRuleLabelRef(label) ::= <<
adaptor.AddChild(root_<treeLevel>, stream_<label>.NextTree());<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel+=rule */
rewriteRuleListLabelRef(label) ::= <<
adaptor.AddChild(root_<treeLevel>, stream_<label>.NextTree());<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel=rule */
rewriteRuleLabelRefRoot(label) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(stream_<label>.NextNode(), root_<treeLevel>);<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel+=rule */
rewriteRuleListLabelRefRoot(label) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(stream_<label>.NextNode(), root_<treeLevel>);<\n>
>>
createImaginaryNode(tokenType,hetero,args) ::= <<
<if(hetero)>
<! new MethodNode(IDLabel, args) !>
new <hetero>(<tokenType><if(args)>, <args; separator=", "><endif>)
<else>
(<ASTLabelType>)adaptor.Create(<tokenType>, <args; separator=", "><if(!args)>"<tokenType>"<endif>)
<endif>
>>
createRewriteNodeFromElement(token,hetero,args) ::= <<
<if(hetero)>
new <hetero>(stream_<token>.NextToken()<if(args)>, <args; separator=", "><endif>)
<else>
<if(args)> <! must create new node from old !>
adaptor.Create(<token>, <args; separator=", ">)
<else>
stream_<token>.NextNode()
<endif>
<endif>
>>
@ -0,0 +1,97 @@
/*
[The "BSD licence"]
Copyright (c) 2007-2008 Johannes Luber
Copyright (c) 2005-2007 Kunle Odutola
Copyright (c) 2005 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Template overrides to add debugging to AST stuff. Dynamic inheritance
* hierarchy is set up as ASTDbg : AST : Dbg : C# by the code generator.
*/
group ASTDbg;
parserMembers() ::= <<
protected DebugTreeAdaptor adaptor;
public ITreeAdaptor TreeAdaptor
{
	get { return this.adaptor; }
	set {
<if(grammar.grammarIsRoot)>
	this.adaptor = new DebugTreeAdaptor(dbg, value);
<else>
	this.adaptor = (DebugTreeAdaptor)value; // delegator sends dbg adaptor
<endif><\n>
	<grammar.directDelegates:{g|<g:delegateName()>.TreeAdaptor = this.adaptor;}>
	}
}<\n>
>>
parserCtorBody() ::= <<
<super.parserCtorBody()>
>>
createListenerAndHandshake() ::= <<
DebugEventSocketProxy proxy = new DebugEventSocketProxy(this, port, adaptor);
DebugListener = proxy;
<!
Original line follows, replaced by the next two ifs:
set<inputStreamType>(new Debug<inputStreamType>(input,proxy));
!>
<if(PARSER)>
TokenStream = new DebugTokenStream(input,proxy);<\n>
<endif>
<if(TREE_PARSER)>
TreeNodeStream = new DebugTreeNodeStream(input, proxy);<\n>
<endif>
try {
proxy.Handshake();
} catch (IOException ioe) {
ReportError(ioe);
}
>>
@ctorForRootGrammar.finally() ::= <<
ITreeAdaptor adap = new CommonTreeAdaptor();
TreeAdaptor = adap;
proxy.TreeAdaptor = adap;
>>
@ctorForProfilingRootGrammar.finally() ::=<<
ITreeAdaptor adap = new CommonTreeAdaptor();
TreeAdaptor = adap;
proxy.TreeAdaptor = adap;
>>
@ctorForPredefinedListener.superClassRef() ::= "base(input, dbg)"
@ctorForPredefinedListener.finally() ::=<<
<if(grammar.grammarIsRoot)> <! don't create new adaptor for delegates !>
ITreeAdaptor adap = new CommonTreeAdaptor();
TreeAdaptor = adap;<\n>
<endif>
>>
@rewriteElement.pregen() ::= "dbg.Location(<e.line>,<e.pos>);"
@ -0,0 +1,220 @@
/*
[The "BSD licence"]
Copyright (c) 2007-2008 Johannes Luber
Copyright (c) 2005-2007 Kunle Odutola
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Templates for building ASTs during normal parsing.
*
* Deal with many combinations. Dimensions are:
* Auto build or rewrite
* no label, label, list label (label/no-label handled together)
* child, root
* token, set, rule, wildcard
*
* The situation is not too bad as rewrite (->) usage makes ^ and !
* invalid. There is no huge explosion of combinations.
*/
group ASTParser;
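/* Illustrative examples (assumed grammar fragments, not part of this file)
 * of how the dimensions above map onto templates:
 *   r : ID^ INT ;    // ID becomes the subtree root (tokenRefRuleRoot),
 *                    // INT is added as a child (tokenRef)
 *   r : ID! INT ;    // ID is matched but kept out of the tree (tokenRefBang)
 *   r : ids+=ID+ ;   // list label, handled by tokenRefAndListLabel
 */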
@rule.setErrorReturnValue() ::= <<
// The conversion of the second argument is necessary, but harmless
retval.Tree = (<ASTLabelType>)adaptor.ErrorNode(input, (IToken) retval.Start, input.LT(-1), re);
<! System.Console.WriteLine("<ruleName> returns " + ((CommonTree)retval.Tree).ToStringTree()); !>
>>
// TOKEN AST STUFF
/** ID and output=AST */
tokenRef(token,label,elementIndex,hetero) ::= <<
<super.tokenRef(...)>
<if(backtracking)>
if ( state.backtracking == 0 )
{
<endif>
<label>_tree = <createNodeFromToken(...)>;
adaptor.AddChild(root_0, <label>_tree);
<if(backtracking)>
}
<endif>
>>
/** ID! and output=AST (same as plain tokenRef) */
tokenRefBang(token,label,elementIndex) ::= "<super.tokenRef(...)>"
/** ID^ and output=AST */
tokenRefRuleRoot(token,label,elementIndex,hetero) ::= <<
<super.tokenRef(...)>
<if(backtracking)>
if ( state.backtracking == 0 )
{
<endif>
<label>_tree = <createNodeFromToken(...)>;
root_0 = (<ASTLabelType>)adaptor.BecomeRoot(<label>_tree, root_0);
<if(backtracking)>
}
<endif>
>>
/** ids+=ID! and output=AST */
tokenRefBangAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)>
<listLabel(elem=label,...)>
>>
/** label+=TOKEN when output=AST but not rewrite alt */
tokenRefAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** Match label+=TOKEN^ when output=AST but not rewrite alt */
tokenRefRuleRootAndListLabel(token,label,hetero,elementIndex) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// SET AST
// The match-set templates are interesting in that they use an argument list
// to pass code to the default matchSet; this is another possible way to alter
// inherited code. I don't use the region mechanism because I need to pass
// different chunks depending on the operator. I don't like making the
// template name encode the operator, as the number of templates gets
// large, but this is the most flexible approach, as opposed to having
// the code generator call matchSet and then add root code or rule-root code
// plus list label plus ... The combinations might require complicated
// rather than just added-on code. Investigate that refactoring when
// I have more time.
matchSet(s,label,hetero,elementIndex,postmatchCode) ::= <<
<super.matchSet(..., postmatchCode={<if(backtracking)>if ( state.backtracking == 0 ) <endif>adaptor.AddChild(root_0, <createNodeFromToken(...)>);})>
>>
matchRuleBlockSet(s,label,hetero,elementIndex,postmatchCode,treeLevel="0") ::= <<
<matchSet(...)>
>>
matchSetBang(s,label,elementIndex,postmatchCode) ::= "<super.matchSet(...)>"
// note there is no matchSetTrack because -> rewrites force sets to be
// plain old blocks of alts: (A|B|...|C)
matchSetRuleRoot(s,label,hetero,elementIndex,debug) ::= <<
<if(label)>
<label>=(<labelType>)input.LT(1);<\n>
<endif>
<super.matchSet(..., postmatchCode={<if(backtracking)>if ( state.backtracking == 0 ) <endif>root_0 = (<ASTLabelType>)adaptor.BecomeRoot(<createNodeFromToken(...)>, root_0);})>
>>
// RULE REF AST
/** rule when output=AST */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking == 0 ) <endif>adaptor.AddChild(root_0, <label>.Tree);
>>
/** rule! is same as normal rule ref */
ruleRefBang(rule,label,elementIndex,args,scope) ::= "<super.ruleRef(...)>"
/** rule^ */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking == 0 ) <endif>root_0 = (<ASTLabelType>)adaptor.BecomeRoot(<label>.Tree, root_0);
>>
/** x+=rule when output=AST */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".Tree",...)>
>>
/** x+=rule! when output=AST is a rule ref with list addition */
ruleRefBangAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefBang(...)>
<listLabel(elem=label+".Tree",...)>
>>
/** x+=rule^ */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<listLabel(elem=label+".Tree",...)>
>>
// WILDCARD AST
wildcard(label,elementIndex) ::= <<
<super.wildcard(...)>
<if(backtracking)>
if ( state.backtracking == 0 )
{
<endif>
<label>_tree = (<ASTLabelType>)adaptor.Create(<label>);
adaptor.AddChild(root_0, <label>_tree);
<if(backtracking)>
}
<endif>
>>
wildcardBang(label,elementIndex) ::= "<super.wildcard(...)>"
wildcardRuleRoot(label,elementIndex) ::= <<
<super.wildcard(...)>
<if(backtracking)>
if ( state.backtracking == 0 )
{
<endif>
<label>_tree = (<ASTLabelType>)adaptor.Create(<label>);
root_0 = (<ASTLabelType>)adaptor.BecomeRoot(<label>_tree, root_0);
<if(backtracking)>
}
<endif>
>>
createNodeFromToken(label,hetero) ::= <<
<if(hetero)>
new <hetero>(<label>) <! new MethodNode(IDLabel) !>
<else>
(<ASTLabelType>)adaptor.Create(<label>)
<endif>
>>
ruleCleanUp() ::= <<
<super.ruleCleanUp()>
<if(backtracking)>
if ( state.backtracking==0 )
{
<endif>
retval.Tree = (<ASTLabelType>)adaptor.RulePostProcessing(root_0);
<if(!TREE_PARSER)>
adaptor.SetTokenBoundaries(retval.Tree, (IToken) retval.Start, (IToken) retval.Stop);
<endif>
<if(backtracking)>
}
<endif>
>>
@ -0,0 +1,299 @@
/*
[The "BSD licence"]
Copyright (c) 2007-2008 Johannes Luber
Copyright (c) 2005-2007 Kunle Odutola
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Templates for building ASTs during tree parsing.
*
* Deal with many combinations. Dimensions are:
* Auto build or rewrite
* no label, label, list label (label/no-label handled together)
* child, root
* token, set, rule, wildcard
*
* Each combination has its own template except that label/no label
* is combined into tokenRef, ruleRef, ...
*/
group ASTTreeParser;
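/* Illustrative example (assumed tree-grammar fragment, not part of this file):
 *   expr : ^(PLUS a=expr b=expr) ;
 * The tree() template below wraps the children in Match(input, Token.DOWN, null)
 * and Match(input, Token.UP, null) calls and maintains _last and the
 * per-level _first variables so that rewrite mode can recover the first
 * tree matched at each level.
 */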
/** Add a variable to track last element matched */
ruleDeclarations() ::= <<
<super.ruleDeclarations()>
<ASTLabelType> _first_0 = null;
<ASTLabelType> _last = null;<\n>
>>
/** What to emit when there is no rewrite rule. For auto build
* mode, does nothing.
*/
noRewrite(rewriteBlockLevel, treeLevel) ::= <<
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(rewriteMode)>
retval.Tree = (<ASTLabelType>)_first_0;
if ( adaptor.GetParent(retval.Tree)!=null && adaptor.IsNil( adaptor.GetParent(retval.Tree) ) )
retval.Tree = (<ASTLabelType>)adaptor.GetParent(retval.Tree);
<endif>
<if(backtracking)>}<endif>
>>
/** match ^(root children) in tree parser; override here to
* add tree construction actions.
*/
tree(root, actionsAfterRoot, children, nullableChildList,
enclosingTreeLevel, treeLevel) ::= <<
_last = (<ASTLabelType>)input.LT(1);
{
<ASTLabelType> _save_last_<treeLevel> = _last;
<ASTLabelType> _first_<treeLevel> = null;
<if(!rewriteMode)>
<ASTLabelType> root_<treeLevel> = (<ASTLabelType>)adaptor.GetNilNode();
<endif>
<root:element()>
<if(rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 )<endif>
<if(root.el.rule)>
if ( _first_<enclosingTreeLevel>==null ) _first_<enclosingTreeLevel> = <root.el.label>.Tree;
<else>
if ( _first_<enclosingTreeLevel>==null ) _first_<enclosingTreeLevel> = <root.el.label>;
<endif>
<endif>
<actionsAfterRoot:element()>
<if(nullableChildList)>
if ( input.LA(1) == Token.DOWN )
{
Match(input, Token.DOWN, null); <checkRuleBacktrackFailure()>
<children:element()>
Match(input, Token.UP, null); <checkRuleBacktrackFailure()>
}
<else>
Match(input, Token.DOWN, null); <checkRuleBacktrackFailure()>
<children:element()>
Match(input, Token.UP, null); <checkRuleBacktrackFailure()>
<endif>
<if(!rewriteMode)>
adaptor.AddChild(root_<enclosingTreeLevel>, root_<treeLevel>);
<endif>
_last = _save_last_<treeLevel>;
}<\n>
>>
// TOKEN AST STUFF
/** ID! and output=AST (same as plain tokenRef) except that it adds
 * the setting of _last
 */
tokenRefBang(token,label,elementIndex) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.tokenRef(...)>
>>
/** ID auto construct */
tokenRef(token,label,elementIndex,hetero) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.tokenRef(...)>
<if(!rewriteMode)>
<if(backtracking)>
if ( state.backtracking == 0 )
{
<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = (<ASTLabelType>)adaptor.DupNode(<label>);
<endif><\n>
adaptor.AddChild(root_<treeLevel>, <label>_tree);
<if(backtracking)>
}
<endif>
<else> <! rewrite mode !>
<if(backtracking)>if ( state.backtracking==0 )<endif>
if ( _first_<treeLevel>==null ) _first_<treeLevel> = <label>;
<endif>
>>
/** label+=TOKEN auto construct */
tokenRefAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) auto construct */
tokenRefRuleRoot(token,label,elementIndex,hetero) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.tokenRef(...)>
<if(!rewriteMode)>
<if(backtracking)>
if ( state.backtracking == 0 )
{
<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = (<ASTLabelType>)adaptor.DupNode(<label>);
<endif><\n>
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(<label>_tree, root_<treeLevel>);
<if(backtracking)>
}
<endif>
<endif>
>>
/** Match ^(label+=TOKEN ...) auto construct */
tokenRefRuleRootAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// SET AST
matchSet(s,label,hetero,elementIndex,postmatchCode) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.matchSet(..., postmatchCode={
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = (<ASTLabelType>)adaptor.DupNode(<label>);
<endif><\n>
adaptor.AddChild(root_<treeLevel>, <label>_tree);
<if(backtracking)>}<endif>
<endif>
}
)>
>>
matchRuleBlockSet(s,label,hetero,elementIndex,postmatchCode,treeLevel="0") ::= <<
<matchSet(...)>
<noRewrite()> <! set return tree !>
>>
matchSetBang(s,label,elementIndex,postmatchCode) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.matchSet(...)>
>>
matchSetRuleRoot(s,label,hetero,elementIndex,debug) ::= <<
<super.matchSet(..., postmatchCode={
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = (<ASTLabelType>)adaptor.DupNode(<label>);
<endif><\n>
root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(<label>_tree, root_<treeLevel>);
<if(backtracking)>}<endif>
<endif>
}
)>
>>
// RULE REF AST
/** rule auto construct */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>
<if(!rewriteMode)>
adaptor.AddChild(root_<treeLevel>, <label>.Tree);
<else> <! rewrite mode !>
if ( _first_<treeLevel>==null ) _first_<treeLevel> = <label>.Tree;
<endif>
>>
/** x+=rule auto construct */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".Tree",...)>
>>
/** ^(rule ...) auto construct */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRef(...)>
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking == 0 ) <endif>root_<treeLevel> = (<ASTLabelType>)adaptor.BecomeRoot(<label>.Tree, root_<treeLevel>);
<endif>
>>
/** ^(x+=rule ...) auto construct */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<listLabel(elem=label+".Tree",...)>
>>
/** rule when output=AST and tracking for rewrite */
ruleRefTrack(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRefTrack(...)>
>>
/** x+=rule when output=AST and tracking for rewrite */
ruleRefTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRefTrackAndListLabel(...)>
>>
/** ^(rule ...) rewrite */
ruleRefRuleRootTrack(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRefRuleRootTrack(...)>
>>
/** ^(x+=rule ...) rewrite */
ruleRefRuleRootTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRefRuleRootTrackAndListLabel(...)>
>>
/** Streams for token refs are tree nodes now; override to
 * change NextToken to NextNode.
 */
createRewriteNodeFromElement(token,hetero,scope) ::= <<
<if(hetero)>
new <hetero>(stream_<token>.NextNode())
<else>
stream_<token>.NextNode()
<endif>
>>
ruleCleanUp() ::= <<
<super.ruleCleanUp()>
<if(!rewriteMode)>
<if(backtracking)>
if ( state.backtracking==0 )
{
<endif>
retval.Tree = (<ASTLabelType>)adaptor.RulePostProcessing(root_0);
<if(backtracking)>
}
<endif>
<endif>
>>
File diff suppressed because it is too large
@ -0,0 +1,288 @@
/*
[The "BSD licence"]
Copyright (c) 2007-2008 Johannes Luber
Copyright (c) 2005-2007 Kunle Odutola
Copyright (c) 2005 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Template overrides to add debugging to the normal C# output;
* If ASTs are built, then you'll also get ASTDbg.stg loaded.
*/
group Dbg;
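/* Illustrative shape of the generated debug wrapping (assumed output sketch,
 * produced by the enterSubRule/exitSubRule and enterDecision/exitDecision
 * triggers defined below):
 *
 *   try { dbg.EnterSubRule(3);
 *   try { dbg.EnterDecision(3);
 *   // ...decision code...
 *   } finally { dbg.ExitDecision(3); }
 *   // ...matched alt...
 *   } finally { dbg.ExitSubRule(3); }
 */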
@outputFile.imports() ::= <<
<@super.imports()>
using Antlr.Runtime.Debug;
using IOException = System.IO.IOException;
>>
@genericParser.members() ::= <<
<if(grammar.grammarIsRoot)>
public static readonly string[] ruleNames = new string[] {
"invalidRule", <grammar.allImportedRules:{rST | "<rST.name>"}; wrap="\n ", separator=", ">
};<\n>
<endif>
<if(grammar.grammarIsRoot)> <! grammar imports other grammar(s) !>
private int ruleLevel = 0;
public int RuleLevel {
get { return ruleLevel; }
}
public void IncRuleLevel() { ruleLevel++; }
public void DecRuleLevel() { ruleLevel--; }
<if(profile)>
<ctorForProfilingRootGrammar()>
<else>
<ctorForRootGrammar()>
<endif>
<ctorForPredefinedListener()>
<else> <! imported grammar !>
public int RuleLevel {
get { return <grammar.delegators:{g| <g:delegateName()>}>.RuleLevel; }
}
public void IncRuleLevel() { <grammar.delegators:{g| <g:delegateName()>}>.IncRuleLevel(); }
public void DecRuleLevel() { <grammar.delegators:{g| <g:delegateName()>}>.DecRuleLevel(); }
<ctorForDelegateGrammar()>
<endif>
<if(profile)>
override public bool AlreadyParsedRule(IIntStream input, int ruleIndex)
{
((Profiler)dbg).ExamineRuleMemoization(input, ruleIndex, <grammar.composite.rootGrammar.recognizerName>.ruleNames[ruleIndex]);
return base.AlreadyParsedRule(input, ruleIndex);
}<\n>
override public void Memoize(IIntStream input,
int ruleIndex,
int ruleStartIndex)
{
((Profiler)dbg).Memoize(input, ruleIndex, ruleStartIndex, <grammar.composite.rootGrammar.recognizerName>.ruleNames[ruleIndex]);
base.Memoize(input, ruleIndex, ruleStartIndex);
}<\n>
<endif>
protected bool EvalPredicate(bool result, string predicate)
{
dbg.SemanticPredicate(result, predicate);
return result;
}<\n>
>>
ctorForRootGrammar() ::= <<
<! bug: can't use <@super.members()>; cut-n-paste instead !>
<! Same except we add port number and profile stuff if root grammar !>
public <name>(<inputStreamType> input)
: this(input, DebugEventSocketProxy.DEFAULT_DEBUGGER_PORT, new RecognizerSharedState()) {
}
public <name>(<inputStreamType> input, int port, RecognizerSharedState state)
: base(input, state) {
<parserCtorBody()>
<createListenerAndHandshake()>
<grammar.directDelegates:{g|<g:delegateName()> = new <g.recognizerName>(input, dbg, this.state, this<grammar.delegators:{g|, <g:delegateName()>}>);}; separator="\n">
<@finally()>
}<\n>
>>
ctorForProfilingRootGrammar() ::= <<
<! bug: can't use <@super.members()>; cut-n-paste instead !>
public <name>(<inputStreamType> input)
	: this(input, new Profiler(null), new RecognizerSharedState()) {
}
public <name>(<inputStreamType> input, IDebugEventListener dbg, RecognizerSharedState state)
: base(input, dbg, state) {
Profiler p = (Profiler)dbg;
p.setParser(this);
<parserCtorBody()>
<grammar.directDelegates:
{g|<g:delegateName()> = new <g.recognizerName>(input, dbg, this.state, this<grammar.delegators:{g|, <g:delegateName()>}>);}; separator="\n">
<@finally()>
}
<\n>
>>
/** Basically we don't want to set any dbg listeners here, as the root will have it. */
ctorForDelegateGrammar() ::= <<
public <name>(<inputStreamType> input, IDebugEventListener dbg, RecognizerSharedState state<grammar.delegators:{g|, <g.recognizerName> <g:delegateName()>}>)
: base(input, dbg, state) {
<parserCtorBody()>
<grammar.directDelegates:
{g|<g:delegateName()> = new <g.recognizerName>(input, this, this.state<grammar.delegators:{g|, <g:delegateName()>}>);}; separator="\n">
}<\n>
>>
ctorForPredefinedListener() ::= <<
public <name>(<inputStreamType> input, IDebugEventListener dbg)
: <@superClassRef>base(input, dbg, new RecognizerSharedState())<@end> {
<if(profile)>
Profiler p = (Profiler)dbg;
p.setParser(this);
<endif>
<parserCtorBody()>
<grammar.directDelegates:{g|<g:delegateName()> = new <g.recognizerName>(input, dbg, this.state, this<grammar.delegators:{g|, <g:delegateName()>}>);}; separator="\n">
<@finally()>
}<\n>
>>
createListenerAndHandshake() ::= <<
<if(TREE_PARSER)>
DebugEventSocketProxy proxy = new DebugEventSocketProxy(this, port, input.TreeAdaptor);
<else>
DebugEventSocketProxy proxy = new DebugEventSocketProxy(this, port, null);
<endif>
DebugListener = proxy;
try
{
proxy.Handshake();
}
catch (IOException ioe)
{
ReportError(ioe);
}
>>
@genericParser.superClassName() ::= "Debug<@super.superClassName()>"
@rule.preamble() ::= <<
try {
dbg.EnterRule(GrammarFileName, "<ruleName>");
if ( RuleLevel==0 ) {dbg.Commence();}
IncRuleLevel();
dbg.Location(<ruleDescriptor.tree.line>, <ruleDescriptor.tree.column>);<\n>
>>
@lexer.debugAddition() ::= ", dbg"
@rule.postamble() ::= <<
dbg.Location(<ruleDescriptor.EORNode.line>, <ruleDescriptor.EORNode.column>);<\n>
}
finally {
dbg.ExitRule(GrammarFileName, "<ruleName>");
DecRuleLevel();
if ( RuleLevel==0 ) {dbg.Terminate();}
}<\n>
>>
@synpred.start() ::= "dbg.BeginBacktrack(state.backtracking);"
@synpred.stop() ::= "dbg.EndBacktrack(state.backtracking, success);"
// Common debug event triggers used by region overrides below
enterSubRule() ::=
"try { dbg.EnterSubRule(<decisionNumber>);<\n>"
exitSubRule() ::=
"} finally { dbg.ExitSubRule(<decisionNumber>); }<\n>"
enterDecision() ::=
"try { dbg.EnterDecision(<decisionNumber>);<\n>"
exitDecision() ::=
"} finally { dbg.ExitDecision(<decisionNumber>); }<\n>"
enterAlt(n) ::= "dbg.EnterAlt(<n>);<\n>"
// Region overrides that tell various constructs to add debugging triggers
@block.predecision() ::= "<enterSubRule()><enterDecision()>"
@block.postdecision() ::= "<exitDecision()>"
@block.postbranch() ::= "<exitSubRule()>"
@ruleBlock.predecision() ::= "<enterDecision()>"
@ruleBlock.postdecision() ::= "<exitDecision()>"
@ruleBlockSingleAlt.prealt() ::= "<enterAlt(n=\"1\")>"
@blockSingleAlt.prealt() ::= "<enterAlt(n=\"1\")>"
@positiveClosureBlock.preloop() ::= "<enterSubRule()>"
@positiveClosureBlock.postloop() ::= "<exitSubRule()>"
@positiveClosureBlock.predecision() ::= "<enterDecision()>"
@positiveClosureBlock.postdecision() ::= "<exitDecision()>"
@positiveClosureBlock.earlyExitException() ::=
"dbg.RecognitionException(eee);<\n>"
@closureBlock.preloop() ::= "<enterSubRule()>"
@closureBlock.postloop() ::= "<exitSubRule()>"
@closureBlock.predecision() ::= "<enterDecision()>"
@closureBlock.postdecision() ::= "<exitDecision()>"
@altSwitchCase.prealt() ::= "<enterAlt(n=i)>"
@element.prematch() ::=
"dbg.Location(<it.line>,<it.pos>);"
@matchSet.mismatchedSetException() ::=
"dbg.RecognitionException(mse);"
@dfaState.noViableAltException() ::= "dbg.RecognitionException(nvae_d<decisionNumber>s<stateNumber>);"
@dfaStateSwitch.noViableAltException() ::= "dbg.RecognitionException(nvae_d<decisionNumber>s<stateNumber>);"
dfaDecision(decisionNumber,description) ::= <<
try
{
isCyclicDecision = true;
<super.dfaDecision(...)>
}
catch (NoViableAltException nvae)
{
dbg.RecognitionException(nvae);
throw nvae;
}
>>
@cyclicDFA.dbgCtor() ::= <<
public DFA<dfa.decisionNumber>(BaseRecognizer recognizer, IDebugEventListener dbg) : this(recognizer)
{
this.dbg = dbg;
}
>>
@cyclicDFA.debugMember() ::= <<
IDebugEventListener dbg;
>>
@cyclicDFA.errorMethod() ::= <<
public override void Error(NoViableAltException nvae)
{
dbg.RecognitionException(nvae);
}
>>
/** Force predicate validation to trigger an event */
evalPredicate(pred,description) ::= <<
EvalPredicate(<pred>,"<description>")
>>
/*
[The "BSD licence"]
Copyright (c) 2007-2008 Johannes Luber
Copyright (c) 2005-2007 Kunle Odutola
Copyright (c) 2005 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Template subgroup to add template rewrite output
* If debugging, then you'll also get STDbg.stg loaded.
*/
group ST;
@outputFile.imports() ::= <<
<@super.imports()>
using Antlr.StringTemplate;
using Antlr.StringTemplate.Language;
<if(!backtracking)>
using Hashtable = System.Collections.Hashtable;
<endif>
>>
/** Add this to each rule's return value struct */
@returnScope.ruleReturnMembers() ::= <<
private StringTemplate st;
public StringTemplate ST { get { return st; } set { st = value; } }
public override object Template { get { return st; } }
public override string ToString() { return (st == null) ? null : st.ToString(); }
>>
@genericParser.members() ::= <<
<@super.members()>
protected StringTemplateGroup templateLib =
new StringTemplateGroup("<name>Templates", typeof(AngleBracketTemplateLexer));
public StringTemplateGroup TemplateLib
{
get { return this.templateLib; }
set { this.templateLib = value; }
}
/// \<summary> Allows convenient multi-value initialization:
/// "new STAttrMap().Add(...).Add(...)"
/// \</summary>
protected class STAttrMap : Hashtable
{
public STAttrMap Add(string attrName, object value)
{
base.Add(attrName, value);
return this;
}
public STAttrMap Add(string attrName, int value)
{
base.Add(attrName, value);
return this;
}
}
>>
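The STAttrMap helper emitted above lets generated code build an attribute map inline via chained Add calls. A hypothetical Java analogue of the same fluent pattern (the class and method names here are illustrative, not part of any ANTLR runtime):

```java
import java.util.HashMap;

// Hypothetical Java analogue of the STAttrMap helper above: add() returns
// the map itself so attribute initialization can be chained inline,
// e.g. new STAttrMap().add("name", tok).add("index", 3).
class STAttrMap extends HashMap<String, Object> {
    STAttrMap add(String attrName, Object value) {
        put(attrName, value);
        return this; // returning 'this' is what enables .add(...).add(...)
    }
}
```

Returning `this` from each mutator is the whole trick; it lets a single expression both construct and populate the map inside a template-invocation argument list.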
/** x+=rule when output=template */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".Template",...)>
>>
rewriteTemplate(alts) ::= <<
// TEMPLATE REWRITE
<if(backtracking)>
if ( state.backtracking==0 )
{
<alts:rewriteTemplateAlt(); separator="else ">
<if(rewriteMode)><replaceTextInLine()><endif>
}
<else>
<alts:rewriteTemplateAlt(); separator="else ">
<if(rewriteMode)><replaceTextInLine()><endif>
<endif>
>>
replaceTextInLine() ::= <<
<if(TREE_PARSER)>
((TokenRewriteStream)input.TokenStream).Replace(
input.TreeAdaptor.GetTokenStartIndex(retval.Start),
input.TreeAdaptor.GetTokenStopIndex(retval.Start),
retval.ST);
<else>
((TokenRewriteStream)input).Replace(
((IToken)retval.Start).TokenIndex,
input.LT(-1).TokenIndex,
retval.ST);
<endif>
>>
rewriteTemplateAlt() ::= <<
// <it.description>
<if(it.pred)>
if (<it.pred>) {
retval.ST = <it.alt>;
}<\n>
<else>
{
retval.ST = <it.alt>;
}<\n>
<endif>
>>
rewriteEmptyTemplate(alts) ::= <<
null;
>>
/** Invoke a template with a set of attribute name/value pairs.
* Set the value of the rule's template *after* having set
* the attributes because the rule's template might be used as
* an attribute to build a bigger template; you get a self-embedded
* template.
*/
rewriteExternalTemplate(name,args) ::= <<
templateLib.GetInstanceOf("<name>"<if(args)>,
new STAttrMap()<args:{a | .Add("<a.name>", <a.value>)}>
<endif>)
>>
/** expr is a string expression that says what template to load */
rewriteIndirectTemplate(expr,args) ::= <<
templateLib.GetInstanceOf(<expr><if(args)>,
new STAttrMap()<args:{a | .Add("<a.name>", <a.value>)}>
<endif>)
>>
/** Invoke an inline template with a set of attribute name/value pairs */
rewriteInlineTemplate(args, template) ::= <<
new StringTemplate(templateLib, "<template>"<if(args)>,
new STAttrMap()<args:{a | .Add("<a.name>", <a.value>)}>
<endif>)
>>
/** plain -> {foo} action */
rewriteAction(action) ::= <<
<action>
>>
/** An action has %st.attrName=expr; or %{st}.attrName=expr; */
actionSetAttribute(st,attrName,expr) ::= <<
(<st>).SetAttribute("<attrName>",<expr>);
>>
/** Translate %{stringExpr} */
actionStringConstructor(stringExpr) ::= <<
new StringTemplate(templateLib,<stringExpr>)
>>
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
group AST;
@outputFile.imports() ::= <<
<@super.imports()>
<if(!TREE_PARSER)><! tree parser would already have imported !>
import org.antlr.runtime.tree.*;<\n>
<endif>
>>
@genericParser.members() ::= <<
<@super.members()>
<parserMembers()>
>>
/** Add an adaptor property that knows how to build trees */
parserMembers() ::= <<
protected TreeAdaptor adaptor = new CommonTreeAdaptor();<\n>
public void setTreeAdaptor(TreeAdaptor adaptor) {
this.adaptor = adaptor;
<grammar.directDelegates:{g|<g:delegateName()>.setTreeAdaptor(this.adaptor);}>
}
public TreeAdaptor getTreeAdaptor() {
return adaptor;
}
>>
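The adaptor installed by parserMembers builds trees through a small protocol: nil() creates a flat placeholder root that collects children, and becomeRoot (the `^` operator) promotes a node to be the new root, adopting the nil node's children. A minimal self-contained sketch of that protocol (hypothetical stand-in classes, not the ANTLR runtime's CommonTreeAdaptor):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the TreeAdaptor protocol the generated code relies on.
class Node {
    final String text;
    final boolean isNil;
    final List<Node> children = new ArrayList<Node>();
    Node(String text, boolean isNil) { this.text = text; this.isNil = isNil; }
}

class TreeAdaptorSketch {
    // A "nil" node is a rootless list that accumulates children.
    Node nil() { return new Node(null, true); }

    void addChild(Node root, Node child) {
        if (child != null) root.children.add(child);
    }

    // becomeRoot: the new root adopts everything gathered under a nil root,
    // or takes the old (real) root as its single child.
    Node becomeRoot(Node newRoot, Node oldRoot) {
        if (oldRoot.isNil) newRoot.children.addAll(oldRoot.children);
        else newRoot.children.add(oldRoot);
        return newRoot;
    }
}
```

This mirrors the `root_0 = (<ASTLabelType>)adaptor.becomeRoot(..., root_0)` lines the templates in this group emit.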
@returnScope.ruleReturnMembers() ::= <<
<ASTLabelType> tree;
public Object getTree() { return tree; }
>>
/** Add a variable to track rule's return AST */
ruleDeclarations() ::= <<
<super.ruleDeclarations()>
<ASTLabelType> root_0 = null;<\n>
>>
ruleLabelDefs() ::= <<
<super.ruleLabelDefs()>
<ruleDescriptor.tokenLabels:{<ASTLabelType> <it.label.text>_tree=null;}; separator="\n">
<ruleDescriptor.tokenListLabels:{<ASTLabelType> <it.label.text>_tree=null;}; separator="\n">
<ruleDescriptor.allTokenRefsInAltsWithRewrites
:{RewriteRule<rewriteElementType>Stream stream_<it>=new RewriteRule<rewriteElementType>Stream(adaptor,"token <it>");}; separator="\n">
<ruleDescriptor.allRuleRefsInAltsWithRewrites
:{RewriteRuleSubtreeStream stream_<it>=new RewriteRuleSubtreeStream(adaptor,"rule <it>");}; separator="\n">
>>
/** When doing auto AST construction, we must define some variables.
 * These should be turned off if doing rewrites. This must be a "mode"
 * as a rule could have both rewrite and AST within the same alternative
 * block.
 */
@alt.declarations() ::= <<
<if(autoAST)>
<if(outerAlt)>
<if(!rewriteMode)>
root_0 = (<ASTLabelType>)adaptor.nil();<\n>
<endif>
<endif>
<endif>
>>
// T r a c k i n g R u l e E l e m e n t s
/** ID and track it for use in a rewrite rule */
tokenRefTrack(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)> <! Track implies no auto AST construction!>
<if(backtracking)>if ( state.backtracking==0 ) <endif>stream_<token>.add(<label>);<\n>
>>
/** ids+=ID and track it for use in a rewrite rule; adds to ids *and*
* to the tracking list stream_ID for use in the rewrite.
*/
tokenRefTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefTrack(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) track for rewrite */
tokenRefRuleRootTrack(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>stream_<token>.add(<label>);<\n>
>>
/** Match ^(label+=TOKEN ...) track for rewrite */
tokenRefRuleRootTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefRuleRootTrack(...)>
<listLabel(elem=label,...)>
>>
wildcardTrack(label,elementIndex) ::= <<
<super.wildcard(...)>
>>
/** rule when output=AST and tracking for rewrite */
ruleRefTrack(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>stream_<rule.name>.add(<label>.getTree());
>>
/** x+=rule when output=AST and tracking for rewrite */
ruleRefTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefTrack(...)>
<listLabel(elem=label+".getTree()",...)>
>>
/** ^(rule ...) rewrite */
ruleRefRuleRootTrack(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>stream_<rule>.add(<label>.getTree());
>>
/** ^(x+=rule ...) rewrite */
ruleRefRuleRootTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRootTrack(...)>
<listLabel(elem=label+".getTree()",...)>
>>
// R e w r i t e
rewriteCode(
alts, description,
referencedElementsDeep, // ALL referenced elements to right of ->
referencedTokenLabels,
referencedTokenListLabels,
referencedRuleLabels,
referencedRuleListLabels,
rewriteBlockLevel, enclosingTreeLevel, treeLevel) ::=
<<
// AST REWRITE
// elements: <referencedElementsDeep; separator=", ">
// token labels: <referencedTokenLabels; separator=", ">
// rule labels: <referencedRuleLabels; separator=", ">
// token list labels: <referencedTokenListLabels; separator=", ">
// rule list labels: <referencedRuleListLabels; separator=", ">
<if(backtracking)>
if ( state.backtracking==0 ) {<\n>
<endif>
<prevRuleRootRef()>.tree = root_0;
<rewriteCodeLabels()>
root_0 = (<ASTLabelType>)adaptor.nil();
<alts:rewriteAlt(); separator="else ">
<! if tree parser and rewrite=true !>
<if(TREE_PARSER)>
<if(rewriteMode)>
<prevRuleRootRef()>.tree = (<ASTLabelType>)adaptor.rulePostProcessing(root_0);
input.replaceChildren(adaptor.getParent(retval.start),
adaptor.getChildIndex(retval.start),
adaptor.getChildIndex(_last),
retval.tree);
<endif>
<endif>
<! if parser or tree-parser && rewrite!=true, we need to set result !>
<if(!TREE_PARSER)>
<prevRuleRootRef()>.tree = root_0;
<else>
<if(!rewriteMode)>
<prevRuleRootRef()>.tree = root_0;
<endif>
<endif>
<if(backtracking)>
}
<endif>
>>
rewriteCodeLabels() ::= <<
<referencedTokenLabels
:{RewriteRule<rewriteElementType>Stream stream_<it>=new RewriteRule<rewriteElementType>Stream(adaptor,"token <it>",<it>);};
separator="\n"
>
<referencedTokenListLabels
:{RewriteRule<rewriteElementType>Stream stream_<it>=new RewriteRule<rewriteElementType>Stream(adaptor,"token <it>", list_<it>);};
separator="\n"
>
<referencedRuleLabels
:{RewriteRuleSubtreeStream stream_<it>=new RewriteRuleSubtreeStream(adaptor,"token <it>",<it>!=null?<it>.tree:null);};
separator="\n"
>
<referencedRuleListLabels
:{RewriteRuleSubtreeStream stream_<it>=new RewriteRuleSubtreeStream(adaptor,"token <it>",list_<it>);};
separator="\n"
>
>>
/** Generate code for an optional rewrite block; note it uses the deep ref'd
 * element list rather than the shallow list other blocks use.
 */
rewriteOptionalBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in immediate block; no nested blocks
description) ::=
<<
// <fileName>:<description>
if ( <referencedElementsDeep:{el | stream_<el>.hasNext()}; separator="||"> ) {
<alt>
}
<referencedElementsDeep:{el | stream_<el>.reset();<\n>}>
>>
rewriteClosureBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in immediate block; no nested blocks
description) ::=
<<
// <fileName>:<description>
while ( <referencedElements:{el | stream_<el>.hasNext()}; separator="||"> ) {
<alt>
}
<referencedElements:{el | stream_<el>.reset();<\n>}>
>>
rewritePositiveClosureBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in immediate block; no nested blocks
description) ::=
<<
if ( !(<referencedElements:{el | stream_<el>.hasNext()}; separator="||">) ) {
throw new RewriteEarlyExitException();
}
while ( <referencedElements:{el | stream_<el>.hasNext()}; separator="||"> ) {
<alt>
}
<referencedElements:{el | stream_<el>.reset();<\n>}>
>>
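The rewrite loops above repeatedly test `stream_<el>.hasNext()`, pull elements, and then `reset()` the streams so enclosing blocks can traverse them again. A sketch of that stream contract (hypothetical class, not the ANTLR runtime's RewriteRuleElementStream):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the stream_<x> objects the rewrite blocks iterate: elements
// recorded during the match phase are replayed in order by the rewrite,
// and reset() rewinds the cursor so the same stream can be reused.
class RewriteStreamSketch<T> {
    private final List<T> elements = new ArrayList<T>();
    private int cursor = 0;

    void add(T el) { elements.add(el); }            // match phase records
    boolean hasNext() { return cursor < elements.size(); }
    T next() { return elements.get(cursor++); }     // rewrite phase replays
    void reset() { cursor = 0; }                    // rewind for outer blocks
}
```

The positive-closure variant above simply adds an up-front `hasNext()` check and throws RewriteEarlyExitException when no stream has any elements left.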
rewriteAlt(a) ::= <<
// <a.description>
<if(a.pred)>
if (<a.pred>) {
<a.alt>
}<\n>
<else>
{
<a.alt>
}<\n>
<endif>
>>
/** For empty rewrites: "r : ... -> ;" */
rewriteEmptyAlt() ::= "root_0 = null;"
rewriteTree(root,children,description,enclosingTreeLevel,treeLevel) ::= <<
// <fileName>:<description>
{
<ASTLabelType> root_<treeLevel> = (<ASTLabelType>)adaptor.nil();
<root:rewriteElement()>
<children:rewriteElement()>
adaptor.addChild(root_<enclosingTreeLevel>, root_<treeLevel>);
}<\n>
>>
rewriteElementList(elements) ::= "<elements:rewriteElement()>"
rewriteElement(e) ::= <<
<@pregen()>
<e.el>
>>
/** Gen ID or ID[args] */
rewriteTokenRef(token,elementIndex,hetero,args) ::= <<
adaptor.addChild(root_<treeLevel>, <createRewriteNodeFromElement(...)>);<\n>
>>
/** Gen $label ... where defined via label=ID */
rewriteTokenLabelRef(label,elementIndex) ::= <<
adaptor.addChild(root_<treeLevel>, stream_<label>.nextNode());<\n>
>>
/** Gen $label ... where defined via label+=ID */
rewriteTokenListLabelRef(label,elementIndex) ::= <<
adaptor.addChild(root_<treeLevel>, stream_<label>.nextNode());<\n>
>>
/** Gen ^($label ...) */
rewriteTokenLabelRefRoot(label,elementIndex) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.becomeRoot(stream_<label>.nextNode(), root_<treeLevel>);<\n>
>>
/** Gen ^($label ...) where label+=... */
rewriteTokenListLabelRefRoot ::= rewriteTokenLabelRefRoot
/** Gen ^(ID ...) or ^(ID[args] ...) */
rewriteTokenRefRoot(token,elementIndex,hetero,args) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.becomeRoot(<createRewriteNodeFromElement(...)>, root_<treeLevel>);<\n>
>>
rewriteImaginaryTokenRef(args,token,hetero,elementIndex) ::= <<
adaptor.addChild(root_<treeLevel>, <createImaginaryNode(tokenType=token, ...)>);<\n>
>>
rewriteImaginaryTokenRefRoot(args,token,hetero,elementIndex) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.becomeRoot(<createImaginaryNode(tokenType=token, ...)>, root_<treeLevel>);<\n>
>>
/** plain -> {foo} action */
rewriteAction(action) ::= <<
root_0 = <action>;<\n>
>>
/** What is the name of the previous value of this rule's root tree? This
 * lets us refer to $rule to mean the previous value. I am reusing the
 * variable 'tree' in the retval struct to hold the value of root_0 right
 * before I set it during rewrites. The assignment will be to retval.tree.
 */
prevRuleRootRef() ::= "retval"
rewriteRuleRef(rule) ::= <<
adaptor.addChild(root_<treeLevel>, stream_<rule>.nextTree());<\n>
>>
rewriteRuleRefRoot(rule) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.becomeRoot(stream_<rule>.nextNode(), root_<treeLevel>);<\n>
>>
rewriteNodeAction(action) ::= <<
adaptor.addChild(root_<treeLevel>, <action>);<\n>
>>
rewriteNodeActionRoot(action) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.becomeRoot(<action>, root_<treeLevel>);<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel=rule */
rewriteRuleLabelRef(label) ::= <<
adaptor.addChild(root_<treeLevel>, stream_<label>.nextTree());<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel+=rule */
rewriteRuleListLabelRef(label) ::= <<
adaptor.addChild(root_<treeLevel>, stream_<label>.nextTree());<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel=rule */
rewriteRuleLabelRefRoot(label) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.becomeRoot(stream_<label>.nextNode(), root_<treeLevel>);<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel+=rule */
rewriteRuleListLabelRefRoot(label) ::= <<
root_<treeLevel> = (<ASTLabelType>)adaptor.becomeRoot(stream_<label>.nextNode(), root_<treeLevel>);<\n>
>>
createImaginaryNode(tokenType,hetero,args) ::= <<
<if(hetero)>
<! new MethodNode(IDLabel, args) !>
new <hetero>(<tokenType><if(args)>, <args; separator=", "><endif>)
<else>
(<ASTLabelType>)adaptor.create(<tokenType>, <args; separator=", "><if(!args)>"<tokenType>"<endif>)
<endif>
>>
createRewriteNodeFromElement(token,hetero,args) ::= <<
<if(hetero)>
new <hetero>(stream_<token>.nextToken()<if(args)>, <args; separator=", "><endif>)
<else>
<if(args)> <! must create new node from old !>
adaptor.create(<token>, <args; separator=", ">)
<else>
stream_<token>.nextNode()
<endif>
<endif>
>>
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Template overrides to add debugging to AST stuff. The dynamic inheritance
 * hierarchy is set up as ASTDbg : AST : Dbg : Java by the code generator.
 */
group ASTDbg;
parserMembers() ::= <<
protected DebugTreeAdaptor adaptor;
public void setTreeAdaptor(TreeAdaptor adaptor) {
<if(grammar.grammarIsRoot)>
this.adaptor = new DebugTreeAdaptor(dbg,adaptor);
<else>
this.adaptor = (DebugTreeAdaptor)adaptor; // delegator sends dbg adaptor
<endif><\n>
<grammar.directDelegates:{g|<g:delegateName()>.setTreeAdaptor(this.adaptor);}>
}
public TreeAdaptor getTreeAdaptor() {
return adaptor;
}<\n>
>>
parserCtorBody() ::= <<
<super.parserCtorBody()>
>>
createListenerAndHandshake() ::= <<
DebugEventSocketProxy proxy =
new DebugEventSocketProxy(this,port,<if(TREE_PARSER)>input.getTreeAdaptor()<else>adaptor<endif>);
setDebugListener(proxy);
set<inputStreamType>(new Debug<inputStreamType>(input,proxy));
try {
proxy.handshake();
}
catch (IOException ioe) {
reportError(ioe);
}
>>
@ctorForRootGrammar.finally() ::= <<
TreeAdaptor adap = new CommonTreeAdaptor();
setTreeAdaptor(adap);
proxy.setTreeAdaptor(adap);
>>
@ctorForProfilingRootGrammar.finally() ::=<<
TreeAdaptor adap = new CommonTreeAdaptor();
setTreeAdaptor(adap);
proxy.setTreeAdaptor(adap);
>>
@ctorForPredefinedListener.superClassRef() ::= "super(input, dbg);"
@ctorForPredefinedListener.finally() ::=<<
<if(grammar.grammarIsRoot)> <! don't create new adaptor for delegates !>
TreeAdaptor adap = new CommonTreeAdaptor();
setTreeAdaptor(adap);<\n>
<endif>
>>
@rewriteElement.pregen() ::= "dbg.location(<e.line>,<e.pos>);"
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Templates for building ASTs during normal parsing.
*
* Deal with many combinations. Dimensions are:
* Auto build or rewrite
* no label, label, list label (label/no-label handled together)
* child, root
* token, set, rule, wildcard
*
* The situation is not too bad as rewrite (->) usage makes ^ and !
* invalid. There is no huge explosion of combinations.
*/
group ASTParser;
@rule.setErrorReturnValue() ::= <<
retval.tree = (<ASTLabelType>)adaptor.errorNode(input, retval.start, input.LT(-1), re);
<! System.out.println("<ruleName> returns "+((CommonTree)retval.tree).toStringTree()); !>
>>
// TOKEN AST STUFF
/** ID and output=AST */
tokenRef(token,label,elementIndex,hetero) ::= <<
<super.tokenRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<label>_tree = <createNodeFromToken(...)>;
adaptor.addChild(root_0, <label>_tree);
<if(backtracking)>}<endif>
>>
/** ID! and output=AST (same as plain tokenRef) */
tokenRefBang(token,label,elementIndex) ::= "<super.tokenRef(...)>"
/** ID^ and output=AST */
tokenRefRuleRoot(token,label,elementIndex,hetero) ::= <<
<super.tokenRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<label>_tree = <createNodeFromToken(...)>;
root_0 = (<ASTLabelType>)adaptor.becomeRoot(<label>_tree, root_0);
<if(backtracking)>}<endif>
>>
/** ids+=ID! and output=AST */
tokenRefBangAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)>
<listLabel(elem=label,...)>
>>
/** label+=TOKEN when output=AST but not rewrite alt */
tokenRefAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** Match label+=TOKEN^ when output=AST but not rewrite alt */
tokenRefRuleRootAndListLabel(token,label,hetero,elementIndex) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// SET AST
// the match set stuff is interesting in that it uses an argument list
// to pass code to the default matchSet; another possible way to alter
// inherited code. I don't use the region stuff because I need to pass
// different chunks depending on the operator. I don't like making
// the template name carry the operator, as the number of templates gets
// large, but this is the most flexible approach--as opposed to having
// the code generator call matchSet and then add root code or rule-root code
// plus list-label code plus ... The combinations might require complicated
// rather than just added-on code. Investigate that refactoring when
// I have more time.
matchSet(s,label,hetero,elementIndex,postmatchCode) ::= <<
<super.matchSet(..., postmatchCode={<if(backtracking)>if ( state.backtracking==0 ) <endif>adaptor.addChild(root_0, <createNodeFromToken(...)>);})>
>>
matchRuleBlockSet(s,label,hetero,elementIndex,postmatchCode,treeLevel="0") ::= <<
<matchSet(...)>
>>
matchSetBang(s,label,elementIndex,postmatchCode) ::= "<super.matchSet(...)>"
// note there is no matchSetTrack because -> rewrites force sets to be
// plain old blocks of alts: (A|B|...|C)
matchSetRuleRoot(s,label,hetero,elementIndex,debug) ::= <<
<if(label)>
<label>=(<labelType>)input.LT(1);<\n>
<endif>
<super.matchSet(..., postmatchCode={<if(backtracking)>if ( state.backtracking==0 ) <endif>root_0 = (<ASTLabelType>)adaptor.becomeRoot(<createNodeFromToken(...)>, root_0);})>
>>
// RULE REF AST
/** rule when output=AST */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>adaptor.addChild(root_0, <label>.getTree());
>>
/** rule! is same as normal rule ref */
ruleRefBang(rule,label,elementIndex,args,scope) ::= "<super.ruleRef(...)>"
/** rule^ */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>root_0 = (<ASTLabelType>)adaptor.becomeRoot(<label>.getTree(), root_0);
>>
/** x+=rule when output=AST */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".getTree()",...)>
>>
/** x+=rule! when output=AST is a rule ref with list addition */
ruleRefBangAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefBang(...)>
<listLabel(elem=label+".getTree()",...)>
>>
/** x+=rule^ */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<listLabel(elem=label+".getTree()",...)>
>>
// WILDCARD AST
wildcard(label,elementIndex) ::= <<
<super.wildcard(...)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<label>_tree = (<ASTLabelType>)adaptor.create(<label>);
adaptor.addChild(root_0, <label>_tree);
<if(backtracking)>}<endif>
>>
wildcardBang(label,elementIndex) ::= "<super.wildcard(...)>"
wildcardRuleRoot(label,elementIndex) ::= <<
<super.wildcard(...)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<label>_tree = (<ASTLabelType>)adaptor.create(<label>);
root_0 = (<ASTLabelType>)adaptor.becomeRoot(<label>_tree, root_0);
<if(backtracking)>}<endif>
>>
createNodeFromToken(label,hetero) ::= <<
<if(hetero)>
new <hetero>(<label>) <! new MethodNode(IDLabel) !>
<else>
(<ASTLabelType>)adaptor.create(<label>)
<endif>
>>
ruleCleanUp() ::= <<
<super.ruleCleanUp()>
<if(backtracking)>if ( state.backtracking==0 ) {<\n><endif>
retval.tree = (<ASTLabelType>)adaptor.rulePostProcessing(root_0);
adaptor.setTokenBoundaries(retval.tree, retval.start, retval.stop);
<if(backtracking)>}<endif>
>>
/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Templates for building ASTs during tree parsing.
*
* Deal with many combinations. Dimensions are:
* Auto build or rewrite
* no label, label, list label (label/no-label handled together)
* child, root
* token, set, rule, wildcard
*
* Each combination has its own template except that label/no label
* is combined into tokenRef, ruleRef, ...
*/
group ASTTreeParser;
/** Add a variable to track last element matched */
ruleDeclarations() ::= <<
<super.ruleDeclarations()>
<ASTLabelType> _first_0 = null;
<ASTLabelType> _last = null;<\n>
>>
/** What to emit when there is no rewrite rule. For auto build
* mode, does nothing.
*/
noRewrite(rewriteBlockLevel, treeLevel) ::= <<
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(rewriteMode)>
retval.tree = (<ASTLabelType>)_first_0;
if ( adaptor.getParent(retval.tree)!=null && adaptor.isNil( adaptor.getParent(retval.tree) ) )
retval.tree = (<ASTLabelType>)adaptor.getParent(retval.tree);
<endif>
<if(backtracking)>}<endif>
>>
/** match ^(root children) in tree parser; override here to
* add tree construction actions.
*/
tree(root, actionsAfterRoot, children, nullableChildList,
enclosingTreeLevel, treeLevel) ::= <<
_last = (<ASTLabelType>)input.LT(1);
{
<ASTLabelType> _save_last_<treeLevel> = _last;
<ASTLabelType> _first_<treeLevel> = null;
<if(!rewriteMode)>
<ASTLabelType> root_<treeLevel> = (<ASTLabelType>)adaptor.nil();
<endif>
<root:element()>
<if(rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 )<endif>
<if(root.el.rule)>
if ( _first_<enclosingTreeLevel>==null ) _first_<enclosingTreeLevel> = <root.el.label>.tree;
<else>
if ( _first_<enclosingTreeLevel>==null ) _first_<enclosingTreeLevel> = <root.el.label>;
<endif>
<endif>
<actionsAfterRoot:element()>
<if(nullableChildList)>
if ( input.LA(1)==Token.DOWN ) {
match(input, Token.DOWN, null); <checkRuleBacktrackFailure()>
<children:element()>
match(input, Token.UP, null); <checkRuleBacktrackFailure()>
}
<else>
match(input, Token.DOWN, null); <checkRuleBacktrackFailure()>
<children:element()>
match(input, Token.UP, null); <checkRuleBacktrackFailure()>
<endif>
<if(!rewriteMode)>
adaptor.addChild(root_<enclosingTreeLevel>, root_<treeLevel>);
<endif>
_last = _save_last_<treeLevel>;
}<\n>
>>
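The tree() template above matches ^(root children) against the flattened node stream a tree parser consumes, where each node's children are bracketed by imaginary DOWN/UP tokens, and the nullableChildList case skips the brackets entirely when a node has no children. A toy sketch of that consumption order (illustrative names, not the ANTLR runtime API; nested subtrees are omitted for brevity):

```java
// Toy sketch: the tree ^("+" "1" "2") arrives as the flat stream
// ["+", DOWN, "1", "2", UP]; tree() consumes root, DOWN, children, UP.
class TreeWalkSketch {
    static final String DOWN = "DOWN", UP = "UP";

    // Returns how many stream entries were consumed matching one tree
    // (children are consumed flatly here; real tree() recurses per child).
    static int consumeTree(String... stream) {
        int i = 0;
        i++;                                               // match the root node
        if (i < stream.length && stream[i].equals(DOWN)) { // nullable child list
            i++;                                           // match DOWN
            while (!stream[i].equals(UP)) i++;             // match the children
            i++;                                           // match UP
        }
        return i;
    }
}
```

A leaf node (no DOWN following it) consumes exactly one entry, which is why the template guards the DOWN/UP match with `input.LA(1)==Token.DOWN` when the child list is nullable.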
// TOKEN AST STUFF
/** ID! and output=AST (same as plain tokenRef) except that it
 * also sets _last
 */
tokenRefBang(token,label,elementIndex) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.tokenRef(...)>
>>
/** ID auto construct */
tokenRef(token,label,elementIndex,hetero) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.tokenRef(...)>
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = (<ASTLabelType>)adaptor.dupNode(<label>);
<endif><\n>
adaptor.addChild(root_<treeLevel>, <label>_tree);
<if(backtracking)>}<endif>
<else> <! rewrite mode !>
<if(backtracking)>if ( state.backtracking==0 )<endif>
if ( _first_<treeLevel>==null ) _first_<treeLevel> = <label>;
<endif>
>>
/** label+=TOKEN auto construct */
tokenRefAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) auto construct */
tokenRefRuleRoot(token,label,elementIndex,hetero) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.tokenRef(...)>
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = (<ASTLabelType>)adaptor.dupNode(<label>);
<endif><\n>
root_<treeLevel> = (<ASTLabelType>)adaptor.becomeRoot(<label>_tree, root_<treeLevel>);
<if(backtracking)>}<endif>
<endif>
>>
/** Match ^(label+=TOKEN ...) auto construct */
tokenRefRuleRootAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// SET AST
matchSet(s,label,hetero,elementIndex,postmatchCode) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.matchSet(..., postmatchCode={
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = (<ASTLabelType>)adaptor.dupNode(<label>);
<endif><\n>
adaptor.addChild(root_<treeLevel>, <label>_tree);
<if(backtracking)>}<endif>
<endif>
}
)>
>>
matchRuleBlockSet(s,label,hetero,elementIndex,postmatchCode,treeLevel="0") ::= <<
<matchSet(...)>
<noRewrite()> <! set return tree !>
>>
matchSetBang(s,label,elementIndex,postmatchCode) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.matchSet(...)>
>>
matchSetRuleRoot(s,label,hetero,elementIndex,debug) ::= <<
<super.matchSet(..., postmatchCode={
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = (<ASTLabelType>)adaptor.dupNode(<label>);
<endif><\n>
root_<treeLevel> = (<ASTLabelType>)adaptor.becomeRoot(<label>_tree, root_<treeLevel>);
<if(backtracking)>}<endif>
<endif>
}
)>
>>
// RULE REF AST
/** rule auto construct */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>
<if(!rewriteMode)>
adaptor.addChild(root_<treeLevel>, <label>.getTree());
<else> <! rewrite mode !>
if ( _first_<treeLevel>==null ) _first_<treeLevel> = <label>.tree;
<endif>
>>
/** x+=rule auto construct */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".getTree()",...)>
>>
/** ^(rule ...) auto construct */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRef(...)>
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>root_<treeLevel> = (<ASTLabelType>)adaptor.becomeRoot(<label>.getTree(), root_<treeLevel>);
<endif>
>>
/** ^(x+=rule ...) auto construct */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<listLabel(elem=label+".getTree()",...)>
>>
/** rule when output=AST and tracking for rewrite */
ruleRefTrack(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRefTrack(...)>
>>
/** x+=rule when output=AST and tracking for rewrite */
ruleRefTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRefTrackAndListLabel(...)>
>>
/** ^(rule ...) rewrite */
ruleRefRuleRootTrack(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRefRootTrack(...)>
>>
/** ^(x+=rule ...) rewrite */
ruleRefRuleRootTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
_last = (<ASTLabelType>)input.LT(1);
<super.ruleRefRuleRootTrackAndListLabel(...)>
>>
/** Streams for token refs are tree nodes now; override to
* change nextToken to nextNode.
*/
createRewriteNodeFromElement(token,hetero,scope) ::= <<
<if(hetero)>
new <hetero>(stream_<token>.nextNode())
<else>
stream_<token>.nextNode()
<endif>
>>
ruleCleanUp() ::= <<
<super.ruleCleanUp()>
<if(!rewriteMode)>
<if(backtracking)>if ( state.backtracking==0 ) {<\n><endif>
retval.tree = (<ASTLabelType>)adaptor.rulePostProcessing(root_0);
<if(backtracking)>}<endif>
<endif>
>>

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Template overrides to add debugging to normal Java output;
 * if ASTs are built, then you'll also get ASTDbg.stg loaded.
 */
group Dbg;
@outputFile.imports() ::= <<
<@super.imports()>
import org.antlr.runtime.debug.*;
import java.io.IOException;
>>
@genericParser.members() ::= <<
<if(grammar.grammarIsRoot)>
public static final String[] ruleNames = new String[] {
"invalidRule", <grammar.allImportedRules:{rST | "<rST.name>"}; wrap="\n ", separator=", ">
};<\n>
<endif>
<if(grammar.grammarIsRoot)> <! grammar imports other grammar(s) !>
public int ruleLevel = 0;
public int getRuleLevel() { return ruleLevel; }
public void incRuleLevel() { ruleLevel++; }
public void decRuleLevel() { ruleLevel--; }
<if(profile)>
<ctorForProfilingRootGrammar()>
<else>
<ctorForRootGrammar()>
<endif>
<ctorForPredefinedListener()>
<else> <! imported grammar !>
public int getRuleLevel() { return <grammar.delegators:{g| <g:delegateName()>}>.getRuleLevel(); }
public void incRuleLevel() { <grammar.delegators:{g| <g:delegateName()>}>.incRuleLevel(); }
public void decRuleLevel() { <grammar.delegators:{g| <g:delegateName()>}>.decRuleLevel(); }
<ctorForDelegateGrammar()>
<endif>
<if(profile)>
public boolean alreadyParsedRule(IntStream input, int ruleIndex) {
((Profiler)dbg).examineRuleMemoization(input, ruleIndex, <grammar.composite.rootGrammar.recognizerName>.ruleNames[ruleIndex]);
return super.alreadyParsedRule(input, ruleIndex);
}<\n>
public void memoize(IntStream input,
int ruleIndex,
int ruleStartIndex)
{
((Profiler)dbg).memoize(input, ruleIndex, ruleStartIndex, <grammar.composite.rootGrammar.recognizerName>.ruleNames[ruleIndex]);
super.memoize(input, ruleIndex, ruleStartIndex);
}<\n>
<endif>
protected boolean evalPredicate(boolean result, String predicate) {
dbg.semanticPredicate(result, predicate);
return result;
}<\n>
>>
ctorForRootGrammar() ::= <<
<! bug: can't use <@super.members()>; cut-n-paste instead !>
<! Same, except we add the port number and profile stuff if root grammar !>
public <name>(<inputStreamType> input) {
this(input, DebugEventSocketProxy.DEFAULT_DEBUGGER_PORT, new RecognizerSharedState());
}
public <name>(<inputStreamType> input, int port, RecognizerSharedState state) {
super(input, state);
<parserCtorBody()>
<createListenerAndHandshake()>
<grammar.directDelegates:{g|<g:delegateName()> = new <g.recognizerName>(input, dbg, this.state, this<grammar.delegators:{g|, <g:delegateName()>}>);}; separator="\n">
<@finally()>
}<\n>
>>
ctorForProfilingRootGrammar() ::= <<
<! bug: can't use <@super.members()>; cut-n-paste instead !>
public <name>(<inputStreamType> input) {
this(input, new Profiler(null), new RecognizerSharedState());
}
public <name>(<inputStreamType> input, DebugEventListener dbg, RecognizerSharedState state) {
super(input, dbg, state);
Profiler p = (Profiler)dbg;
p.setParser(this);
<parserCtorBody()>
<grammar.directDelegates:
{g|<g:delegateName()> = new <g.recognizerName>(input, dbg, this.state, this<grammar.delegators:{g|, <g:delegateName()>}>);}; separator="\n">
<@finally()>
}
<\n>
>>
/** Basically we don't want to set any dbg listeners, as the root will have it. */
ctorForDelegateGrammar() ::= <<
public <name>(<inputStreamType> input, DebugEventListener dbg, RecognizerSharedState state<grammar.delegators:{g|, <g.recognizerName> <g:delegateName()>}>) {
super(input, dbg, state);
<parserCtorBody()>
<grammar.directDelegates:
{g|<g:delegateName()> = new <g.recognizerName>(input, this, this.state<grammar.delegators:{g|, <g:delegateName()>}>);}; separator="\n">
}<\n>
>>
ctorForPredefinedListener() ::= <<
public <name>(<inputStreamType> input, DebugEventListener dbg) {
<@superClassRef>super(input, dbg, new RecognizerSharedState());<@end>
<if(profile)>
Profiler p = (Profiler)dbg;
p.setParser(this);
<endif>
<parserCtorBody()>
<grammar.directDelegates:{g|<g:delegateName()> = new <g.recognizerName>(input, dbg, this.state, this<grammar.delegators:{g|, <g:delegateName()>}>);}; separator="\n">
<@finally()>
}<\n>
>>
createListenerAndHandshake() ::= <<
<if(TREE_PARSER)>
DebugEventSocketProxy proxy =
new DebugEventSocketProxy(this, port, input.getTreeAdaptor());<\n>
<else>
DebugEventSocketProxy proxy =
new DebugEventSocketProxy(this, port, null);<\n>
<endif>
setDebugListener(proxy);
try {
proxy.handshake();
}
catch (IOException ioe) {
reportError(ioe);
}
>>
@genericParser.superClassName() ::= "Debug<@super.superClassName()>"
@rule.preamble() ::= <<
try { dbg.enterRule(getGrammarFileName(), "<ruleName>");
if ( getRuleLevel()==0 ) {dbg.commence();}
incRuleLevel();
dbg.location(<ruleDescriptor.tree.line>, <ruleDescriptor.tree.column>);<\n>
>>
@rule.postamble() ::= <<
dbg.location(<ruleDescriptor.EORNode.line>, <ruleDescriptor.EORNode.column>);<\n>
}
finally {
dbg.exitRule(getGrammarFileName(), "<ruleName>");
decRuleLevel();
if ( getRuleLevel()==0 ) {dbg.terminate();}
}<\n>
>>
@synpred.start() ::= "dbg.beginBacktrack(state.backtracking);"
@synpred.stop() ::= "dbg.endBacktrack(state.backtracking, success);"
// Common debug event triggers used by region overrides below
enterSubRule() ::=
"try { dbg.enterSubRule(<decisionNumber>);<\n>"
exitSubRule() ::=
"} finally {dbg.exitSubRule(<decisionNumber>);}<\n>"
enterDecision() ::=
"try { dbg.enterDecision(<decisionNumber>);<\n>"
exitDecision() ::=
"} finally {dbg.exitDecision(<decisionNumber>);}<\n>"
enterAlt(n) ::= "dbg.enterAlt(<n>);<\n>"
// Region overrides that tell various constructs to add debugging triggers
@block.predecision() ::= "<enterSubRule()><enterDecision()>"
@block.postdecision() ::= "<exitDecision()>"
@block.postbranch() ::= "<exitSubRule()>"
@ruleBlock.predecision() ::= "<enterDecision()>"
@ruleBlock.postdecision() ::= "<exitDecision()>"
@ruleBlockSingleAlt.prealt() ::= "<enterAlt(n=\"1\")>"
@blockSingleAlt.prealt() ::= "<enterAlt(n=\"1\")>"
@positiveClosureBlock.preloop() ::= "<enterSubRule()>"
@positiveClosureBlock.postloop() ::= "<exitSubRule()>"
@positiveClosureBlock.predecision() ::= "<enterDecision()>"
@positiveClosureBlock.postdecision() ::= "<exitDecision()>"
@positiveClosureBlock.earlyExitException() ::=
"dbg.recognitionException(eee);<\n>"
@closureBlock.preloop() ::= "<enterSubRule()>"
@closureBlock.postloop() ::= "<exitSubRule()>"
@closureBlock.predecision() ::= "<enterDecision()>"
@closureBlock.postdecision() ::= "<exitDecision()>"
@altSwitchCase.prealt() ::= "<enterAlt(n=i)>"
@element.prematch() ::=
"dbg.location(<it.line>,<it.pos>);"
@matchSet.mismatchedSetException() ::=
"dbg.recognitionException(mse);"
@dfaState.noViableAltException() ::= "dbg.recognitionException(nvae);"
@dfaStateSwitch.noViableAltException() ::= "dbg.recognitionException(nvae);"
dfaDecision(decisionNumber,description) ::= <<
try {
isCyclicDecision = true;
<super.dfaDecision(...)>
}
catch (NoViableAltException nvae) {
dbg.recognitionException(nvae);
throw nvae;
}
>>
@cyclicDFA.errorMethod() ::= <<
public void error(NoViableAltException nvae) {
dbg.recognitionException(nvae);
}
>>
/** Force predicate validation to trigger an event */
evalPredicate(pred,description) ::= <<
evalPredicate(<pred>,"<description>")
>>
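As a hedged sketch of what the evalPredicate override above produces: the generated parser wraps every semantic predicate in a call that reports the outcome to the debug listener before returning it unchanged. The class and interface names below are invented for illustration; the real listener is the `org.antlr.runtime.debug` API.

```java
// Hedged sketch: how the Dbg templates route semantic predicates through
// a listener callback. DebugListener here is a stand-in interface, not
// the real org.antlr.runtime.debug.DebugEventListener.
interface DebugListener {
    void semanticPredicate(boolean result, String predicate);
}

public class EvalPredicateDemo {
    static DebugListener dbg =
        (result, predicate) -> System.out.println(predicate + " => " + result);

    // Mirrors the generated evalPredicate(): report the outcome, then
    // return it unchanged so the parsing decision is unaffected.
    public static boolean evalPredicate(boolean result, String predicate) {
        dbg.semanticPredicate(result, predicate);
        return result;
    }

    public static void main(String[] args) {
        int x = 3;
        // Generated code calls evalPredicate(<pred>, "<description>")
        // wherever the grammar had a {...}? predicate.
        boolean taken = evalPredicate(x > 0, "x > 0");
        System.out.println(taken);
    }
}
```

The key design point is that the wrapper is transparent: predicates still drive prediction exactly as before, and the debugger merely observes each evaluation.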

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Template subgroup to add template rewrite output.
 * If debugging, then you'll also get STDbg.stg loaded.
 */
group ST;
@outputFile.imports() ::= <<
<@super.imports()>
import org.antlr.stringtemplate.*;
import org.antlr.stringtemplate.language.*;
import java.util.HashMap;
>>
/** Add this to each rule's return value struct */
@returnScope.ruleReturnMembers() ::= <<
public StringTemplate st;
public Object getTemplate() { return st; }
public String toString() { return st==null?null:st.toString(); }
>>
@genericParser.members() ::= <<
<@super.members()>
protected StringTemplateGroup templateLib =
new StringTemplateGroup("<name>Templates", AngleBracketTemplateLexer.class);
public void setTemplateLib(StringTemplateGroup templateLib) {
this.templateLib = templateLib;
}
public StringTemplateGroup getTemplateLib() {
return templateLib;
}
/** allows convenient multi-value initialization:
* "new STAttrMap().put(...).put(...)"
*/
public static class STAttrMap extends HashMap {
public STAttrMap put(String attrName, Object value) {
super.put(attrName, value);
return this;
}
public STAttrMap put(String attrName, int value) {
super.put(attrName, new Integer(value));
return this;
}
}
>>
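The comment above describes the fluent-map trick: each `put()` returns `this`, so a whole attribute set can be built in one expression. A minimal, self-contained sketch (the demo class name is invented; the STAttrMap body mirrors the generated code above):

```java
import java.util.HashMap;

// Sketch of the generated STAttrMap: put() returns `this` instead of the
// previous value, enabling "new STAttrMap().put(...).put(...)" chains.
public class STAttrMapDemo {
    public static class STAttrMap extends HashMap<String, Object> {
        public STAttrMap put(String attrName, Object value) {
            super.put(attrName, value);
            return this;
        }
        public STAttrMap put(String attrName, int value) {
            super.put(attrName, Integer.valueOf(value));
            return this;
        }
    }

    public static void main(String[] args) {
        // One expression initializes both attributes.
        STAttrMap attrs = new STAttrMap().put("name", "x").put("size", 3);
        System.out.println(attrs.get("name") + " " + attrs.get("size"));
    }
}
```

Note the covariant return type: overriding `HashMap.put` to return the subclass is legal because `STAttrMap` is a subtype of the inherited return type `Object`.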
/** x+=rule when output=template */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".getTemplate()",...)>
>>
rewriteTemplate(alts) ::= <<
// TEMPLATE REWRITE
<if(backtracking)>
if ( state.backtracking==0 ) {
<alts:rewriteTemplateAlt(); separator="else ">
<if(rewriteMode)><replaceTextInLine()><endif>
}
<else>
<alts:rewriteTemplateAlt(); separator="else ">
<if(rewriteMode)><replaceTextInLine()><endif>
<endif>
>>
replaceTextInLine() ::= <<
<if(TREE_PARSER)>
((TokenRewriteStream)input.getTokenStream()).replace(
input.getTreeAdaptor().getTokenStartIndex(retval.start),
input.getTreeAdaptor().getTokenStopIndex(retval.start),
retval.st);
<else>
((TokenRewriteStream)input).replace(
((Token)retval.start).getTokenIndex(),
input.LT(-1).getTokenIndex(),
retval.st);
<endif>
>>
rewriteTemplateAlt() ::= <<
// <it.description>
<if(it.pred)>
if (<it.pred>) {
retval.st = <it.alt>;
}<\n>
<else>
{
retval.st = <it.alt>;
}<\n>
<endif>
>>
rewriteEmptyTemplate(alts) ::= <<
null;
>>
/** Invoke a template with a set of attribute name/value pairs.
* Set the value of the rule's template *after* having set
* the attributes because the rule's template might be used as
* an attribute to build a bigger template; you get a self-embedded
* template.
*/
rewriteExternalTemplate(name,args) ::= <<
templateLib.getInstanceOf("<name>"<if(args)>,
new STAttrMap()<args:{a | .put("<a.name>", <a.value>)}>
<endif>)
>>
/** expr is a string expression that says what template to load */
rewriteIndirectTemplate(expr,args) ::= <<
templateLib.getInstanceOf(<expr><if(args)>,
new STAttrMap()<args:{a | .put("<a.name>", <a.value>)}>
<endif>)
>>
/** Invoke an inline template with a set of attribute name/value pairs */
rewriteInlineTemplate(args, template) ::= <<
new StringTemplate(templateLib, "<template>"<if(args)>,
new STAttrMap()<args:{a | .put("<a.name>", <a.value>)}>
<endif>)
>>
/** plain -> {foo} action */
rewriteAction(action) ::= <<
<action>
>>
/** An action has %st.attrName=expr; or %{st}.attrName=expr; */
actionSetAttribute(st,attrName,expr) ::= <<
(<st>).setAttribute("<attrName>",<expr>);
>>
/** Translate %{stringExpr} */
actionStringConstructor(stringExpr) ::= <<
new StringTemplate(templateLib,<stringExpr>)
>>

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
group AST;
@outputFile.imports() ::= <<
<@super.imports()>
>>
@genericParser.members() ::= <<
<@super.members()>
<parserMembers()>
>>
/** Add an adaptor property that knows how to build trees */
parserMembers() ::= <<
<!protected TreeAdaptor adaptor = new CommonTreeAdaptor();<\n>!>
setTreeAdaptor: function(adaptor) {
this.adaptor = adaptor;
},
getTreeAdaptor: function() {
return this.adaptor;
},
>>
@returnScope.ruleReturnMembers() ::= <<
getTree: function() { return this.tree; }
>>
/** Add a variable to track rule's return AST */
ruleDeclarations() ::= <<
<super.ruleDeclarations()>
var root_0 = null;<\n>
>>
ruleLabelDefs() ::= <<
<super.ruleLabelDefs()>
<ruleDescriptor.tokenLabels:{var <it.label.text>_tree=null;}; separator="\n">
<ruleDescriptor.tokenListLabels:{var <it.label.text>_tree=null;}; separator="\n">
<ruleDescriptor.allTokenRefsInAltsWithRewrites
:{var stream_<it>=new org.antlr.runtime.tree.RewriteRuleTokenStream(this.adaptor,"token <it>");}; separator="\n">
<ruleDescriptor.allRuleRefsInAltsWithRewrites
:{var stream_<it>=new org.antlr.runtime.tree.RewriteRuleSubtreeStream(this.adaptor,"rule <it>");}; separator="\n">
>>
/** When doing auto AST construction, we must define some variables;
 * these should be turned off if doing rewrites. This must be a "mode"
 * as a rule could have both rewrite and AST within the same alternative
 * block.
 */
@alt.declarations() ::= <<
<if(autoAST)>
<if(outerAlt)>
<if(!rewriteMode)>
root_0 = this.adaptor.nil();<\n>
<endif>
<endif>
<endif>
>>
// T r a c k i n g R u l e E l e m e n t s
/** ID and track it for use in a rewrite rule */
tokenRefTrack(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)> <! Track implies no auto AST construction!>
<if(backtracking)>if ( this.state.backtracking===0 ) <endif>stream_<token>.add(<label>);<\n>
>>
/** ids+=ID and track it for use in a rewrite rule; adds to ids *and*
* to the tracking list stream_ID for use in the rewrite.
*/
tokenRefTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefTrack(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) track for rewrite */
tokenRefRuleRootTrack(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)>
<if(backtracking)>if ( this.state.backtracking===0 ) <endif>stream_<token>.add(<label>);<\n>
>>
/** Match ^(label+=TOKEN ...) track for rewrite */
tokenRefRuleRootTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefRuleRootTrack(...)>
<listLabel(elem=label,...)>
>>
wildcardTrack(label,elementIndex) ::= <<
<super.wildcard(...)>
>>
/** rule when output=AST and tracking for rewrite */
ruleRefTrack(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( this.state.backtracking===0 ) <endif>stream_<rule.name>.add(<label>.getTree());
>>
/** x+=rule when output=AST and tracking for rewrite */
ruleRefTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefTrack(...)>
<listLabel(elem=label+".getTree()",...)>
>>
/** ^(rule ...) rewrite */
ruleRefRuleRootTrack(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<if(backtracking)>if ( this.state.backtracking===0 ) <endif>stream_<rule>.add(<label>.getTree());
>>
/** ^(x+=rule ...) rewrite */
ruleRefRuleRootTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRootTrack(...)>
<listLabel(elem=label+".getTree()",...)>
>>
// R e w r i t e
rewriteCode(
alts, description,
referencedElementsDeep, // ALL referenced elements to right of ->
referencedTokenLabels,
referencedTokenListLabels,
referencedRuleLabels,
referencedRuleListLabels,
rewriteBlockLevel, enclosingTreeLevel, treeLevel) ::=
<<
// AST REWRITE
// elements: <referencedElementsDeep; separator=", ">
// token labels: <referencedTokenLabels; separator=", ">
// rule labels: <referencedRuleLabels; separator=", ">
// token list labels: <referencedTokenListLabels; separator=", ">
// rule list labels: <referencedRuleListLabels; separator=", ">
<if(backtracking)>
if ( this.state.backtracking===0 ) {<\n>
<endif>
<prevRuleRootRef()>.tree = root_0;
<rewriteCodeLabels()>
root_0 = this.adaptor.nil();
<alts:rewriteAlt(); separator="else ">
<! if tree parser and rewrite=true !>
<if(TREE_PARSER)>
<if(rewriteMode)>
<prevRuleRootRef()>.tree = this.adaptor.rulePostProcessing(root_0);
this.input.replaceChildren(this.adaptor.getParent(retval.start),
this.adaptor.getChildIndex(retval.start),
this.adaptor.getChildIndex(_last),
retval.tree);
<endif>
<endif>
<! if parser or rewrite!=true, we need to set result !>
<if(!TREE_PARSER)>
<prevRuleRootRef()>.tree = root_0;
<endif>
<if(!rewriteMode)>
<prevRuleRootRef()>.tree = root_0;
<endif>
<if(backtracking)>
}
<endif>
>>
rewriteCodeLabels() ::= <<
<referencedTokenLabels
:{var stream_<it>=new org.antlr.runtime.tree.RewriteRule<rewriteElementType>Stream(this.adaptor,"token <it>",<it>);};
separator="\n"
>
<referencedTokenListLabels
:{var stream_<it>=new org.antlr.runtime.tree.RewriteRule<rewriteElementType>Stream(this.adaptor,"token <it>", list_<it>);};
separator="\n"
>
<referencedRuleLabels
:{var stream_<it>=new org.antlr.runtime.tree.RewriteRuleSubtreeStream(this.adaptor,"token <it>",<it>!=null?<it>.tree:null);};
separator="\n"
>
<referencedRuleListLabels
:{var stream_<it>=new org.antlr.runtime.tree.RewriteRuleSubtreeStream(this.adaptor,"token <it>",list_<it>);};
separator="\n"
>
>>
/** Generate code for an optional rewrite block; note it uses the deeply
 * referenced element list rather than the shallow one like other blocks.
 */
rewriteOptionalBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
	referencedElements, // elements in immediate block; no nested blocks
description) ::=
<<
// <fileName>:<description>
if ( <referencedElementsDeep:{el | stream_<el>.hasNext()}; separator="||"> ) {
<alt>
}
<referencedElementsDeep:{el | stream_<el>.reset();<\n>}>
>>
rewriteClosureBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
	referencedElements, // elements in immediate block; no nested blocks
description) ::=
<<
// <fileName>:<description>
while ( <referencedElements:{el | stream_<el>.hasNext()}; separator="||"> ) {
<alt>
}
<referencedElements:{el | stream_<el>.reset();<\n>}>
>>
rewritePositiveClosureBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
	referencedElements, // elements in immediate block; no nested blocks
description) ::=
<<
if ( !(<referencedElements:{el | stream_<el>.hasNext()}; separator="||">) ) {
throw new org.antlr.runtime.tree.RewriteEarlyExitException();
}
while ( <referencedElements:{el | stream_<el>.hasNext()}; separator="||"> ) {
<alt>
}
<referencedElements:{el | stream_<el>.reset();<\n>}>
>>
rewriteAlt(a) ::= <<
// <a.description>
<if(a.pred)>
if (<a.pred>) {
<a.alt>
}<\n>
<else>
{
<a.alt>
}<\n>
<endif>
>>
/** For empty rewrites: "r : ... -> ;" */
rewriteEmptyAlt() ::= "root_0 = null;"
rewriteTree(root,children,description,enclosingTreeLevel,treeLevel) ::= <<
// <fileName>:<description>
{
var root_<treeLevel> = this.adaptor.nil();
<root:rewriteElement()>
<children:rewriteElement()>
this.adaptor.addChild(root_<enclosingTreeLevel>, root_<treeLevel>);
}<\n>
>>
rewriteElementList(elements) ::= "<elements:rewriteElement()>"
rewriteElement(e) ::= <<
<@pregen()>
<e.el>
>>
/** Gen ID or ID[args] */
rewriteTokenRef(token,elementIndex,hetero,args) ::= <<
this.adaptor.addChild(root_<treeLevel>, <createRewriteNodeFromElement(...)>);<\n>
>>
/** Gen $label ... where defined via label=ID */
rewriteTokenLabelRef(label,elementIndex) ::= <<
this.adaptor.addChild(root_<treeLevel>, stream_<label>.nextNode());<\n>
>>
/** Gen $label ... where defined via label+=ID */
rewriteTokenListLabelRef(label,elementIndex) ::= <<
this.adaptor.addChild(root_<treeLevel>, stream_<label>.nextNode());<\n>
>>
/** Gen ^($label ...) */
rewriteTokenLabelRefRoot(label,elementIndex) ::= <<
root_<treeLevel> = this.adaptor.becomeRoot(stream_<label>.nextNode(), root_<treeLevel>);<\n>
>>
/** Gen ^($label ...) where label+=... */
rewriteTokenListLabelRefRoot ::= rewriteTokenLabelRefRoot
/** Gen ^(ID ...) or ^(ID[args] ...) */
rewriteTokenRefRoot(token,elementIndex,hetero,args) ::= <<
root_<treeLevel> = this.adaptor.becomeRoot(<createRewriteNodeFromElement(...)>, root_<treeLevel>);<\n>
>>
rewriteImaginaryTokenRef(args,token,hetero,elementIndex) ::= <<
this.adaptor.addChild(root_<treeLevel>, <createImaginaryNode(tokenType=token, ...)>);<\n>
>>
rewriteImaginaryTokenRefRoot(args,token,hetero,elementIndex) ::= <<
root_<treeLevel> = this.adaptor.becomeRoot(<createImaginaryNode(tokenType=token, ...)>, root_<treeLevel>);<\n>
>>
/** plain -> {foo} action */
rewriteAction(action) ::= <<
root_0 = <action>;<\n>
>>
/** What is the name of the previous value of this rule's root tree? This
 * lets us refer to $rule to mean the previous value. I am reusing the
 * variable 'tree' sitting in the retval struct to hold the value of root_0 right
 * before I set it during rewrites. The assignment will be to retval.tree.
 */
prevRuleRootRef() ::= "retval"
rewriteRuleRef(rule) ::= <<
this.adaptor.addChild(root_<treeLevel>, stream_<rule>.nextTree());<\n>
>>
rewriteRuleRefRoot(rule) ::= <<
root_<treeLevel> = this.adaptor.becomeRoot(stream_<rule>.nextNode(), root_<treeLevel>);<\n>
>>
rewriteNodeAction(action) ::= <<
this.adaptor.addChild(root_<treeLevel>, <action>);<\n>
>>
rewriteNodeActionRoot(action) ::= <<
root_<treeLevel> = this.adaptor.becomeRoot(<action>, root_<treeLevel>);<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel=rule */
rewriteRuleLabelRef(label) ::= <<
this.adaptor.addChild(root_<treeLevel>, stream_<label>.nextTree());<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel+=rule */
rewriteRuleListLabelRef(label) ::= <<
this.adaptor.addChild(root_<treeLevel>, stream_<label>.nextTree());<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel=rule */
rewriteRuleLabelRefRoot(label) ::= <<
root_<treeLevel> = this.adaptor.becomeRoot(stream_<label>.nextNode(), root_<treeLevel>);<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel+=rule */
rewriteRuleListLabelRefRoot(label) ::= <<
root_<treeLevel> = this.adaptor.becomeRoot(stream_<label>.nextNode(), root_<treeLevel>);<\n>
>>
createImaginaryNode(tokenType,hetero,args) ::= <<
<if(hetero)>
<! new MethodNode(IDLabel, args) !>
new <hetero>(<tokenType><if(args)>, <args; separator=", "><endif>)
<else>
this.adaptor.create(<tokenType>, <args; separator=", "><if(!args)>"<tokenType>"<endif>)
<endif>
>>
createRewriteNodeFromElement(token,hetero,args) ::= <<
<if(hetero)>
new <hetero>(stream_<token>.nextToken()<if(args)>, <args; separator=", "><endif>)
<else>
<if(args)> <! must create new node from old !>
this.adaptor.create(<token>, <args; separator=", ">)
<else>
stream_<token>.nextNode()
<endif>
<endif>
>>

/** Templates for building ASTs during normal parsing.
*
* Deal with many combinations. Dimensions are:
* Auto build or rewrite
* no label, label, list label (label/no-label handled together)
* child, root
* token, set, rule, wildcard
*
* The situation is not too bad as rewrite (->) usage makes ^ and !
* invalid. There is no huge explosion of combinations.
*/
group ASTParser;
@rule.setErrorReturnValue() ::= <<
retval.tree = this.adaptor.errorNode(this.input, retval.start, this.input.LT(-1), re);
>>
// TOKEN AST STUFF
/** ID and output=AST */
tokenRef(token,label,elementIndex,hetero) ::= <<
<super.tokenRef(...)>
<if(backtracking)>if ( this.state.backtracking===0 ) {<endif>
<label>_tree = <createNodeFromToken(...)>;
this.adaptor.addChild(root_0, <label>_tree);
<if(backtracking)>}<endif>
>>
/** ID! and output=AST (same as plain tokenRef) */
tokenRefBang(token,label,elementIndex) ::= "<super.tokenRef(...)>"
/** ID^ and output=AST */
tokenRefRuleRoot(token,label,elementIndex,hetero) ::= <<
<super.tokenRef(...)>
<if(backtracking)>if ( this.state.backtracking===0 ) {<endif>
<label>_tree = <createNodeFromToken(...)>;
root_0 = this.adaptor.becomeRoot(<label>_tree, root_0);
<if(backtracking)>}<endif>
>>
/** ids+=ID! and output=AST */
tokenRefBangAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)>
<listLabel(elem=label,...)>
>>
/** label+=TOKEN when output=AST but not rewrite alt */
tokenRefAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** Match label+=TOKEN^ when output=AST but not rewrite alt */
tokenRefRuleRootAndListLabel(token,label,hetero,elementIndex) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// SET AST
// the match set stuff is interesting in that it uses an argument list
// to pass code to the default matchSet; another possible way to alter
// inherited code. I don't use the region stuff because I need to pass
// different chunks depending on the operator. I don't like making
// the template name have the operator as the number of templates gets
// large but this is the most flexible--this is as opposed to having
// the code generator call matchSet then add root code or ruleroot code
// plus list label plus ... The combinations might require complicated
// code rather than just added-on code. Investigate that refactoring when
// I have more time.
matchSet(s,label,hetero,elementIndex,postmatchCode) ::= <<
<super.matchSet(..., postmatchCode={<if(backtracking)>if ( this.state.backtracking===0 ) <endif>this.adaptor.addChild(root_0, <createNodeFromToken(...)>);})>
>>
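The comment above describes passing a chunk of code (postmatchCode) into an inherited template as an argument. In ordinary code the analogous pattern is a higher-order function: the base matcher runs, then the injected hook varies per call site. A sketch under that analogy (hypothetical names, not the generated API):

```javascript
// Base matcher: verify the token is in the set, then run the injected hook.
function matchSet(input, set, postmatchCode) {
    var token = input.shift();
    if (set.indexOf(token) === -1) {
        throw new Error("mismatched set");
    }
    if (postmatchCode) {
        postmatchCode(token);  // "inherited code", altered per call site
    }
    return token;
}

// Child-position variant injects an addChild-style hook...
var children = [];
matchSet(["A"], ["A", "B"], function (tok) { children.push(tok); });
// ...while a rule-root variant would inject a becomeRoot-style hook instead,
// avoiding a separate template per operator.
```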
matchRuleBlockSet(s,label,hetero,elementIndex,postmatchCode,treeLevel="0") ::= <<
<matchSet(...)>
>>
matchSetBang(s,label,elementIndex,postmatchCode) ::= "<super.matchSet(...)>"
// note there is no matchSetTrack because -> rewrites force sets to be
// plain old blocks of alts: (A|B|...|C)
matchSetRuleRoot(s,label,hetero,elementIndex,debug) ::= <<
<if(label)>
<label>=input.LT(1);<\n>
<endif>
<super.matchSet(..., postmatchCode={<if(backtracking)>if ( this.state.backtracking===0 ) <endif>root_0 = this.adaptor.becomeRoot(<createNodeFromToken(...)>, root_0);})>
>>
// RULE REF AST
/** rule when output=AST */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( this.state.backtracking===0 ) <endif>this.adaptor.addChild(root_0, <label>.getTree());
>>
/** rule! is same as normal rule ref */
ruleRefBang(rule,label,elementIndex,args,scope) ::= "<super.ruleRef(...)>"
/** rule^ */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( this.state.backtracking===0 ) <endif>root_0 = this.adaptor.becomeRoot(<label>.getTree(), root_0);
>>
/** x+=rule when output=AST */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".getTree()",...)>
>>
/** x+=rule! when output=AST is a rule ref with list addition */
ruleRefBangAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefBang(...)>
<listLabel(elem=label+".getTree()",...)>
>>
/** x+=rule^ */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<listLabel(elem=label+".getTree()",...)>
>>
// WILDCARD AST
wildcard(label,elementIndex) ::= <<
<super.wildcard(...)>
<if(backtracking)>if ( this.state.backtracking===0 ) {<endif>
<label>_tree = this.adaptor.create(<label>);
this.adaptor.addChild(root_0, <label>_tree);
<if(backtracking)>}<endif>
>>
wildcardBang(label,elementIndex) ::= "<super.wildcard(...)>"
wildcardRuleRoot(label,elementIndex) ::= <<
<super.wildcard(...)>
<if(backtracking)>if ( this.state.backtracking===0 ) {<endif>
<label>_tree = this.adaptor.create(<label>);
root_0 = this.adaptor.becomeRoot(<label>_tree, root_0);
<if(backtracking)>}<endif>
>>
createNodeFromToken(label,hetero) ::= <<
<if(hetero)>
new <hetero>(<label>) <! new MethodNode(IDLabel) !>
<else>
this.adaptor.create(<label>)
<endif>
>>
ruleCleanUp() ::= <<
<super.ruleCleanUp()>
<if(backtracking)>if ( this.state.backtracking===0 ) {<\n><endif>
retval.tree = this.adaptor.rulePostProcessing(root_0);
this.adaptor.setTokenBoundaries(retval.tree, retval.start, retval.stop);
<if(backtracking)>}<endif>
>>
/** Templates for building ASTs during tree parsing.
*
* Deal with many combinations. Dimensions are:
* Auto build or rewrite
* no label, label, list label (label/no-label handled together)
* child, root
* token, set, rule, wildcard
*
* Each combination has its own template except that label/no label
* is combined into tokenRef, ruleRef, ...
*/
group ASTTreeParser;
/** Add a variable to track last element matched */
ruleDeclarations() ::= <<
<super.ruleDeclarations()>
var _first_0 = null;
var _last = null;<\n>
>>
/** What to emit when there is no rewrite rule. For auto build
* mode, does nothing.
*/
noRewrite(rewriteBlockLevel, treeLevel) ::= <<
<if(backtracking)>if ( this.state.backtracking===0 ) {<endif>
<if(rewriteMode)>
retval.tree = _first_0;
if ( this.adaptor.getParent(retval.tree) && this.adaptor.isNil( this.adaptor.getParent(retval.tree) ) )
retval.tree = this.adaptor.getParent(retval.tree);
<endif>
<if(backtracking)>}<endif>
>>
/** match ^(root children) in tree parser; override here to
* add tree construction actions.
*/
tree(root, actionsAfterRoot, children, nullableChildList,
enclosingTreeLevel, treeLevel) ::= <<
_last = this.input.LT(1);
{
var _save_last_<treeLevel> = _last;
var _first_<treeLevel> = null;
<if(!rewriteMode)>
var root_<treeLevel> = this.adaptor.nil();
<endif>
<root:element()>
<if(rewriteMode)>
<if(backtracking)>if ( this.state.backtracking===0 )<endif>
<if(root.el.rule)>
if ( !_first_<enclosingTreeLevel> ) _first_<enclosingTreeLevel> = <root.el.label>.tree;
<else>
if ( !_first_<enclosingTreeLevel> ) _first_<enclosingTreeLevel> = <root.el.label>;
<endif>
<endif>
<actionsAfterRoot:element()>
<if(nullableChildList)>
if ( this.input.LA(1)==org.antlr.runtime.Token.DOWN ) {
this.match(this.input, org.antlr.runtime.Token.DOWN, null); <checkRuleBacktrackFailure()>
<children:element()>
this.match(this.input, org.antlr.runtime.Token.UP, null); <checkRuleBacktrackFailure()>
}
<else>
this.match(this.input, org.antlr.runtime.Token.DOWN, null); <checkRuleBacktrackFailure()>
<children:element()>
this.match(this.input, org.antlr.runtime.Token.UP, null); <checkRuleBacktrackFailure()>
<endif>
<if(!rewriteMode)>
this.adaptor.addChild(root_<enclosingTreeLevel>, root_<treeLevel>);
<endif>
_last = _save_last_<treeLevel>;
}<\n>
>>
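The tree template above can be read against the flattened node stream a tree parser consumes: a subtree `^(A B C)` is serialized as `A DOWN B C UP`, and the nullableChildList branch only consumes `DOWN` when children are actually present. A minimal sketch of that walk, with assumed numeric values for the imaginary DOWN/UP token types:

```javascript
// ANTLR conventionally uses small imaginary token types for tree navigation;
// the exact values here are assumptions for the sketch.
var DOWN = 2, UP = 3;

function walkSubtree(input) {
    var root = input.shift();              // <root:element()>
    var children = [];
    if (input[0] === DOWN) {               // nullableChildList: children optional
        input.shift();                     // match DOWN
        while (input[0] !== UP) {
            children.push(input.shift());  // <children:element()>
        }
        input.shift();                     // match UP
    }
    return { root: root, children: children };
}

var t = walkSubtree(["A", DOWN, "B", "C", UP]);
```

A childless node serializes as just `["A"]`, which is exactly the case the `if` guard handles.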
// TOKEN AST STUFF
/** ID! and output=AST (same as plain tokenRef), except that it also
 * sets _last
 */
tokenRefBang(token,label,elementIndex) ::= <<
_last = this.input.LT(1);
<super.tokenRef(...)>
>>
/** ID auto construct */
tokenRef(token,label,elementIndex,hetero) ::= <<
_last = this.input.LT(1);
<super.tokenRef(...)>
<if(!rewriteMode)>
<if(backtracking)>if ( this.state.backtracking===0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = this.adaptor.dupNode(<label>);
<endif><\n>
this.adaptor.addChild(root_<treeLevel>, <label>_tree);
<if(backtracking)>}<endif>
<else> <! rewrite mode !>
<if(backtracking)>if ( this.state.backtracking===0 )<endif>
if ( !_first_<treeLevel> ) _first_<treeLevel> = <label>;
<endif>
>>
/** label+=TOKEN auto construct */
tokenRefAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) auto construct */
tokenRefRuleRoot(token,label,elementIndex,hetero) ::= <<
_last = this.input.LT(1);
<super.tokenRef(...)>
<if(!rewriteMode)>
<if(backtracking)>if ( this.state.backtracking===0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = this.adaptor.dupNode(<label>);
<endif><\n>
root_<treeLevel> = this.adaptor.becomeRoot(<label>_tree, root_<treeLevel>);
<if(backtracking)>}<endif>
<endif>
>>
/** Match ^(label+=TOKEN ...) auto construct */
tokenRefRuleRootAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// SET AST
matchSet(s,label,hetero,elementIndex,postmatchCode) ::= <<
_last = this.input.LT(1);
<super.matchSet(..., postmatchCode={
<if(!rewriteMode)>
<if(backtracking)>if ( this.state.backtracking===0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = this.adaptor.dupNode(<label>);
<endif><\n>
this.adaptor.addChild(root_<treeLevel>, <label>_tree);
<if(backtracking)>}<endif>
<endif>
}
)>
>>
matchRuleBlockSet(s,label,hetero,elementIndex,postmatchCode,treeLevel="0") ::= <<
<matchSet(...)>
<noRewrite()> <! set return tree !>
>>
matchSetBang(s,label,elementIndex,postmatchCode) ::= <<
_last = this.input.LT(1);
<super.matchSet(...)>
>>
matchSetRuleRoot(s,label,hetero,elementIndex,debug) ::= <<
<super.matchSet(..., postmatchCode={
<if(!rewriteMode)>
<if(backtracking)>if ( this.state.backtracking===0 ) {<endif>
<if(hetero)>
<label>_tree = new <hetero>(<label>);
<else>
<label>_tree = this.adaptor.dupNode(<label>);
<endif><\n>
root_<treeLevel> = this.adaptor.becomeRoot(<label>_tree, root_<treeLevel>);
<if(backtracking)>}<endif>
<endif>
}
)>
>>
// RULE REF AST
/** rule auto construct */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
_last = this.input.LT(1);
<super.ruleRef(...)>
<if(backtracking)>if ( this.state.backtracking===0 ) <endif>
<if(!rewriteMode)>
this.adaptor.addChild(root_<treeLevel>, <label>.getTree());
<else> <! rewrite mode !>
if ( !_first_<treeLevel> ) _first_<treeLevel> = <label>.tree;
<endif>
>>
/** x+=rule auto construct */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".getTree()",...)>
>>
/** ^(rule ...) auto construct */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
_last = this.input.LT(1);
<super.ruleRef(...)>
<if(!rewriteMode)>
<if(backtracking)>if ( this.state.backtracking===0 ) <endif>root_<treeLevel> = this.adaptor.becomeRoot(<label>.getTree(), root_<treeLevel>);
<endif>
>>
/** ^(x+=rule ...) auto construct */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<listLabel(elem=label+".getTree()",...)>
>>
/** Streams for token refs are tree nodes now; override to
* change nextToken to nextNode.
*/
createRewriteNodeFromElement(token,hetero,scope) ::= <<
<if(hetero)>
new <hetero>(stream_<token>.nextNode())
<else>
stream_<token>.nextNode()
<endif>
>>
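The rewrite templates above lean on a small stream protocol: elements matched to the left of `->` are buffered, then replayed through `hasNext`/`nextNode` while building the rewrite tree, and `reset` allows replay inside `(...)*` rewrite loops. A hypothetical sketch of that protocol (the real RewriteRuleTokenStream/SubtreeStream classes live in the runtime, not here):

```javascript
// Buffered, replayable stream of matched elements for rewrite construction.
function RewriteStream(elements) {
    this.elements = elements.slice();
    this.cursor = 0;
}
RewriteStream.prototype.hasNext = function () {
    return this.cursor < this.elements.length;
};
RewriteStream.prototype.nextNode = function () {
    return this.elements[this.cursor++];
};
RewriteStream.prototype.reset = function () {
    this.cursor = 0;   // lets closure rewrite blocks replay the elements
};

var stream_ID = new RewriteStream(["a", "b"]);
var out = [];
while (stream_ID.hasNext()) {
    out.push(stream_ID.nextNode());
}
stream_ID.reset();
```

This is also where the tree-parser override matters: `nextNode` hands back tree nodes directly, whereas the parser version's `nextToken` would hand back tokens needing wrapping.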
ruleCleanUp() ::= <<
<super.ruleCleanUp()>
<if(!rewriteMode)>
<if(backtracking)>if ( this.state.backtracking===0 ) {<\n><endif>
retval.tree = this.adaptor.rulePostProcessing(root_0);
<if(backtracking)>}<endif>
<endif>
>>
/*
[The "BSD licence"]
Copyright (c) 2006, 2007 Kay Roepke
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
group AST;
@outputFile.imports() ::= <<
<@super.imports()>
<if(!TREE_PARSER)><! tree parser would already have imported !>
#import \<ANTLR/ANTLR.h><\n>
<endif>
>>
@parserHeaderFile.ivars() ::= <<
<@super.ivars()>
<parserIVars()>
>>
@parserHeaderFile.methodsdecl() ::= <<
<@super.methodsdecl()>
<parserMethodsDecl()>
>>
@genericParser.methods() ::= <<
<@super.methods()>
<parserMethods()>
>>
/** additional init code for tree support */
@genericParser.init() ::= <<
<@super.init()>
[self setTreeAdaptor:[[[ANTLRCommonTreeAdaptor alloc] init] autorelease]];
>>
@genericParser.dealloc() ::= <<
[self setTreeAdaptor:nil];
<@super.dealloc()>
>>
/** Add an adaptor property that knows how to build trees */
parserIVars() ::= <<
id\<ANTLRTreeAdaptor> treeAdaptor;
>>
/** Declaration of additional tree support methods - go in interface of parserHeaderFile() */
parserMethodsDecl() ::= <<
- (id\<ANTLRTreeAdaptor>) treeAdaptor;
- (void) setTreeAdaptor:(id\<ANTLRTreeAdaptor>)theTreeAdaptor;
>>
/** Definition of additional tree support methods - go in implementation of genericParser() */
parserMethods() ::= <<
- (id\<ANTLRTreeAdaptor>) treeAdaptor
{
return treeAdaptor;
}
- (void) setTreeAdaptor:(id\<ANTLRTreeAdaptor>)aTreeAdaptor
{
if (aTreeAdaptor != treeAdaptor) {
[aTreeAdaptor retain];
[treeAdaptor release];
treeAdaptor = aTreeAdaptor;
}
}
>>
/** additional ivars for return scopes */
@returnScopeInterface.ivars() ::= <<
<recognizer.ASTLabelType; null="id"> tree;
>>
/** the interface of returnScope methods */
@returnScopeInterface.methods() ::= <<
- (<recognizer.ASTLabelType; null="id">) tree;
- (void) setTree:(<recognizer.ASTLabelType; null="id">)aTree;
>>
/** the implementation of returnScope methods */
@returnScope.methods() ::= <<
- (<ASTLabelType>) tree
{
return tree;
}
- (void) setTree:(<ASTLabelType>)aTree
{
if (tree != aTree) {
[aTree retain];
[tree release];
tree = aTree;
}
}
- (void) dealloc
{
[self setTree:nil];
[super dealloc];
}
>>
/** Add a variable to track rule's return AST */
ruleDeclarations() ::= <<
<super.ruleDeclarations()>
<ASTLabelType> root_0 = nil;<\n>
>>
ruleLabelDefs() ::= <<
<super.ruleLabelDefs()>
<ruleDescriptor.tokenLabels:{<ASTLabelType> _<it.label.text>_tree = nil;}; separator="\n">
<ruleDescriptor.tokenListLabels:{<ASTLabelType> _<it.label.text>_tree = nil;}; separator="\n">
<ruleDescriptor.allTokenRefsInAltsWithRewrites
:{ANTLRRewriteRuleTokenStream *_stream_<it>=[[ANTLRRewriteRuleTokenStream alloc] initWithTreeAdaptor:treeAdaptor description:@"token <it>"];}; separator="\n">
<ruleDescriptor.allRuleRefsInAltsWithRewrites
:{ANTLRRewriteRuleSubtreeStream *_stream_<it>=[[ANTLRRewriteRuleSubtreeStream alloc] initWithTreeAdaptor:treeAdaptor description:@"rule <it>"];}; separator="\n">
>>
ruleCleanUp() ::= <<
<super.ruleCleanUp()>
<[ruleDescriptor.allTokenRefsInAltsWithRewrites,ruleDescriptor.allRuleRefsInAltsWithRewrites]
:{[_stream_<it> release];}; separator="\n">
<if(ruleDescriptor.hasMultipleReturnValues)>
<if(backtracking)>
if ( ![_state isBacktracking] ) {<\n>
<endif>
[_<prevRuleRootRef()> setTree:(<ASTLabelType>)[treeAdaptor postProcessTree:root_0]];
[treeAdaptor setBoundariesForTree:[_<prevRuleRootRef()> tree] fromToken:[_<prevRuleRootRef()> start] toToken:[_<prevRuleRootRef()> stop]];
<if(backtracking)>
}
<endif>
<endif>
[root_0 release];
>>
rewriteCodeLabelsCleanUp() ::= <<
<referencedTokenLabels
:{[_stream_<it> release];};
separator="\n"
>
<referencedTokenListLabels
:{[_stream_<it> release];};
separator="\n"
>
<referencedRuleLabels
:{[_stream_<it> release];};
separator="\n"
>
<referencedRuleListLabels
:{[_stream_<it> release];};
separator="\n"
>
>>
/** When doing auto AST construction, we must define some variables.
 * These should be turned off if doing rewrites. This must be a "mode"
 * as a rule could have both rewrite and AST within the same alternative
 * block.
 */
@alt.declarations() ::= <<
<if(autoAST)>
<if(outerAlt)>
root_0 = (<ASTLabelType>)[treeAdaptor newEmptyTree];<\n>
<endif>
<endif>
>>
// T r a c k i n g R u l e E l e m e n t s
/** ID and track it for use in a rewrite rule */
tokenRefTrack(token,label,elementIndex) ::= <<
<super.tokenRef(...)>
<if(backtracking)>if ( ![_state isBacktracking] ) <endif>[_stream_<token> addElement:_<label>];<\n>
>>
/** ids+=ID and track it for use in a rewrite rule; adds to ids *and*
* to the tracking list stream_ID for use in the rewrite.
*/
tokenRefTrackAndListLabel(token,label,elementIndex) ::= <<
<tokenRefTrack(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) track for rewrite */
tokenRefRuleRootTrack(token,label,elementIndex) ::= <<
<super.tokenRef(...)>
<if(backtracking)>if ( ![_state isBacktracking] ) <endif>[_stream_<token> addElement:_<label>];<\n>
>>
/** Match ^(label+=TOKEN ...) track for rewrite */
tokenRefRuleRootTrackAndListLabel(token,label,elementIndex) ::= <<
<tokenRefRuleRootTrack(...)>
<listLabel(elem=label,...)>
>>
/** rule when output=AST and tracking for rewrite */
ruleRefTrack(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( ![_state isBacktracking] ) <endif>[_stream_<rule.name> addElement:[_<label> tree]];
>>
/** x+=rule when output=AST and tracking for rewrite */
ruleRefTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefTrack(...)>
<listLabel(elem=label,...)>
>>
/** ^(rule ...) rewrite */
ruleRefRuleRootTrack(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRefRuleRoot(...)>
<if(backtracking)>if ( ![_state isBacktracking] ) <endif>[_stream_<rule.name> addElement:[_<label> tree]];<\n>
>>
/** ^(x+=rule ...) rewrite */
ruleRefRuleRootTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRootTrack(...)>
<listLabel(elem="["+label+" tree]",...)>
>>
// R e w r i t e
rewriteCode(
alts, description,
referencedElementsDeep, // ALL referenced elements to right of ->
referencedTokenLabels,
referencedTokenListLabels,
referencedRuleLabels,
referencedRuleListLabels,
rewriteBlockLevel, enclosingTreeLevel, treeLevel) ::=
<<
// AST REWRITE
// elements: <referencedElementsDeep; separator=", ">
// token labels: <referencedTokenLabels; separator=", ">
// rule labels: <referencedRuleLabels; separator=", ">
// token list labels: <referencedTokenListLabels; separator=", ">
// rule list labels: <referencedRuleListLabels; separator=", ">
<if(backtracking)>
if (![_state isBacktracking]) {<\n>
<endif>
int i_0 = 0;
root_0 = (<ASTLabelType>)[treeAdaptor newEmptyTree];
[_<prevRuleRootRef()> setTree:root_0];
<rewriteCodeLabels()>
<alts:rewriteAlt(); separator="else ">
<rewriteCodeLabelsCleanUp()>
<if(backtracking)>
}
<endif>
>>
rewriteCodeLabels() ::= <<
<referencedTokenLabels
:{ANTLRRewriteRuleTokenStream *_stream_<it>=[[ANTLRRewriteRuleTokenStream alloc] initWithTreeAdaptor:treeAdaptor description:@"token <it>" element:_<it>];};
separator="\n"
>
<referencedTokenListLabels
:{ANTLRRewriteRuleTokenStream *_stream_<it>=[[ANTLRRewriteRuleTokenStream alloc] initWithTreeAdaptor:treeAdaptor description:@"token <it>" elements:_<it>_list];};
separator="\n"
>
<referencedRuleLabels
:{ANTLRRewriteRuleSubtreeStream *_stream_<it>=[[ANTLRRewriteRuleSubtreeStream alloc] initWithTreeAdaptor:treeAdaptor description:@"token <it>" element:_<it>!=nil?[_<it> tree]:nil];};
separator="\n"
>
<referencedRuleListLabels
:{ANTLRRewriteRuleSubtreeStream *_stream_<it>=[[ANTLRRewriteRuleSubtreeStream alloc] initWithTreeAdaptor:treeAdaptor description:@"token <it>" elements:_list_<it>];};
separator="\n"
>
>>
/** Generate code for an optional rewrite block; note it uses the deeply
 * referenced element list rather than the shallow one used by other blocks.
 */
rewriteOptionalBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in the immediate block; no nested blocks
description) ::=
<<
// <fileName>:<description>
if ( <referencedElementsDeep:{el | [_stream_<el> hasNext]}; separator="||"> ) {
<alt>
}
<referencedElementsDeep:{el | [_stream_<el> reset];<\n>}>
>>
rewriteClosureBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in the immediate block; no nested blocks
description) ::=
<<
// <fileName>:<description>
while ( <referencedElements:{el | [_stream_<el> hasNext]}; separator="||"> ) {
<alt>
}
<referencedElements:{el | [_stream_<el> reset];<\n>}>
>>
rewritePositiveClosureBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in the immediate block; no nested blocks
description) ::=
<<
// <fileName>:<description>
{
if ( !(<referencedElements:{el | [_stream_<el> hasNext]}; separator="||">) ) {
@throw [NSException exceptionWithName:@"RewriteEarlyExitException" reason:nil userInfo:nil];
}
while ( <referencedElements:{el | [_stream_<el> hasNext]}; separator="||"> ) {
<alt>
}
<referencedElements:{el | [_stream_<el> reset];<\n>}>
}
>>
rewriteAlt(a) ::= <<
// <a.description>
<if(a.pred)>
if (<a.pred>) {
<a.alt>
}<\n>
<else>
{
<a.alt>
}<\n>
<endif>
>>
/** For empty rewrites: "r : ... -> ;" */
rewriteEmptyAlt() ::= "root_0 = nil;"
rewriteTree(root,children,description,enclosingTreeLevel,treeLevel) ::= <<
// <fileName>:<description>
{
<ASTLabelType> root_<treeLevel> = (<ASTLabelType>)[treeAdaptor newEmptyTree];
<root:rewriteElement()>
<children:rewriteElement()>
[treeAdaptor addChild:root_<treeLevel> toTree:root_<enclosingTreeLevel>];
[root_<treeLevel> release];
}<\n>
>>
rewriteElementList(elements) ::= "<elements:rewriteElement()>"
rewriteElement(e) ::= <<
<@pregen()>
<e.el>
>>
/** Gen ID or ID[args] */
rewriteTokenRef(token,elementIndex,hetero,args) ::= <<
<if(args)>
id\<ANTLRTree> _<token>_tree = [treeAdaptor newTreeWithToken:_<token>]; // TODO: args: <args; separator=", ">
<endif>
[treeAdaptor addChild:<if(args)>_<token>_tree<else>[_stream_<token> next]<endif> toTree:root_<treeLevel>];
<if(args)>
[_<token>_tree release];<\n>
<endif>
<\n>
>>
/** Gen $label ... where defined via label=ID */
rewriteTokenLabelRef(label,elementIndex) ::= <<
[treeAdaptor addChild:[_stream_<label> next] toTree:root_<treeLevel>];<\n>
>>
/** Gen $label ... where defined via label+=ID */
rewriteTokenListLabelRef(label,elementIndex) ::= <<
[treeAdaptor addChild:[_stream_<label> next] toTree:root_<treeLevel>];<\n>
>>
/** Gen ^($label ...) */
rewriteTokenLabelRefRoot(label,elementIndex) ::= <<
root_<treeLevel> = (<ASTLabelType>)[treeAdaptor makeNode:[_stream_<label> next] parentOf:root_<treeLevel>];<\n>
>>
/** Gen ^($label ...) where label+=... */
rewriteTokenListLabelRefRoot ::= rewriteTokenLabelRefRoot
/** Gen ^(ID ...) or ^(ID[args] ...) */
rewriteTokenRefRoot(token,elementIndex,hetero,args) ::= <<
root_<treeLevel> = (<ASTLabelType>)[treeAdaptor makeNode:[_stream_<token> next] parentOf:root_<treeLevel>];<\n>
>>
rewriteImaginaryTokenRef(args,token,hetero,elementIndex) ::= <<
<if(first(rest(args)))><! got two arguments - means create from token with custom text!>
id\<ANTLRTree> _<token>_tree = [treeAdaptor newTreeWithToken:<first(args)> tokenType:<token> text:@<first(rest(args))>];
[treeAdaptor addChild:_<token>_tree toTree:root_<treeLevel>];
[_<token>_tree release];<\n>
<else><! at most one argument !>
<if(first(args))>
id\<ANTLRTree> _<token>_tree = [treeAdaptor newTreeWithToken:<first(args)> tokenType:<token>];
[treeAdaptor addChild:_<token>_tree toTree:root_<treeLevel>];
[_<token>_tree release];<\n>
<else><! no argument at all !>
id\<ANTLRTree> _<token>_tree = [treeAdaptor newTreeWithTokenType:<token> text:[tokenNames objectAtIndex:<token>]];
[treeAdaptor addChild:_<token>_tree toTree:root_<treeLevel>];
[_<token>_tree release];<\n>
<endif><! one arg !>
<endif><! two args !>
>>
rewriteImaginaryTokenRefRoot(args,token,hetero,elementIndex) ::= <<
<if(first(rest(args)))><! got two arguments - means create from token with custom text!>
id\<ANTLRTree> _<token>_tree = [treeAdaptor newTreeWithToken:<first(args)> tokenType:<token> text:@<first(rest(args))>];
root_<treeLevel> = (<ASTLabelType>)[treeAdaptor makeNode:_<token>_tree parentOf:root_<treeLevel>];
[_<token>_tree release];<\n>
<else><! at most one argument !>
<if(first(args))>
id\<ANTLRTree> _<token>_tree = [treeAdaptor newTreeWithToken:<first(args)> tokenType:<token>];
root_<treeLevel> = (<ASTLabelType>)[treeAdaptor makeNode:_<token>_tree parentOf:root_<treeLevel>];
[_<token>_tree release];<\n>
<else><! no argument at all !>id\<ANTLRTree> _<token>_tree = [treeAdaptor newTreeWithTokenType:<token> text:[tokenNames objectAtIndex:<token>]];
root_<treeLevel> = (<ASTLabelType>)[treeAdaptor makeNode:_<token>_tree parentOf:root_<treeLevel>];
[_<token>_tree release];<\n>
<endif><! one arg !>
<endif><! two args !>
>>
/** plain -> {foo} action */
rewriteAction(action) ::= <<
root_0 = <action>;<\n>
>>
/** What is the name of the previous value of this rule's root tree? This
 * lets us refer to $rule to mean the previous value. I am reusing the
 * variable 'tree' sitting in the retval struct to hold the value of root_0 right
 * before I set it during rewrites. The assignment will be to retval.tree.
 */
prevRuleRootRef() ::= "retval"
rewriteRuleRef(rule) ::= <<
[treeAdaptor addChild:[_stream_<rule> next] toTree:root_<treeLevel>];<\n>
>>
rewriteRuleRefRoot(rule) ::= <<
root_<treeLevel> = (<ASTLabelType>)[treeAdaptor makeNode:(id\<ANTLRTree>)[_stream_<rule> next] parentOf:root_<treeLevel>];<\n>
>>
rewriteNodeAction(action) ::= <<
[treeAdaptor addChild:<action> toTree:root_<treeLevel>];<\n>
>>
rewriteNodeActionRoot(action) ::= <<
root_<treeLevel> = (<ASTLabelType>)[treeAdaptor makeNode:<action> parentOf:root_<treeLevel>];<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel=rule */
rewriteRuleLabelRef(label) ::= <<
[treeAdaptor addChild:[_<label> tree] toTree:root_<treeLevel>];<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel+=rule */
rewriteRuleListLabelRef(label) ::= <<
[treeAdaptor addChild:[(ANTLR<if(TREE_PARSER)>Tree<else>Parser<endif>RuleReturnScope *)[_stream_<label> next] tree] toTree:root_<treeLevel>];<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel=rule */
rewriteRuleLabelRefRoot(label) ::= <<
root_<treeLevel> = (<ASTLabelType>)[treeAdaptor makeNode:[_<label> tree] parentOf:root_<treeLevel>];<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel+=rule */
rewriteRuleListLabelRefRoot(label) ::= <<
root_<treeLevel> = (<ASTLabelType>)[treeAdaptor makeNode:[(ANTLR<if(TREE_PARSER)>Tree<else>Parser<endif>RuleReturnScope *)[_stream_<label> next] tree] parentOf:root_<treeLevel>];<\n>
>>
createImaginaryNode(tokenType,hetero,args) ::= <<
<if(hetero)>
<! new MethodNode(IDLabel, args) !>
new <hetero>(<tokenType><if(args)>, <args; separator=", "><endif>)
<else>
(<ASTLabelType>)adaptor.create(<tokenType>, <args; separator=", "><if(!args)>"<tokenType>"<endif>)
<endif>
>>
createRewriteNodeFromElement(token,hetero,args) ::= <<
<if(hetero)>
new <hetero>(stream_<token>.nextToken()<if(args)>, <args; separator=", "><endif>)
<else>
<if(args)> <! must create new node from old !>
adaptor.create(<token>, <args; separator=", ">)
<else>
stream_<token>.nextNode()
<endif>
<endif>
>>
/*
[The "BSD licence"]
Copyright (c) 2006 Kay Roepke
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
group ASTDbg;
/*
parserMembers() ::= <<
protected TreeAdaptor adaptor =
new DebugTreeAdaptor(dbg,new CommonTreeAdaptor());
public void setTreeAdaptor(TreeAdaptor adaptor) {
this.adaptor = new DebugTreeAdaptor(dbg,adaptor);
}
public TreeAdaptor getTreeAdaptor() {
return adaptor;
}<\n>
>>
*/
@treeParserHeaderFile.superClassName ::= "ANTLRDebugTreeParser"
@rewriteElement.pregen() ::= "[debugListener locationLine:<e.line> column:<e.pos>];"
/*
[The "BSD licence"]
Copyright (c) 2007 Kay Roepke
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Templates for building ASTs during normal parsing.
*
* Deal with many combinations. Dimensions are:
* Auto build or rewrite
* no label, label, list label (label/no-label handled together)
* child, root
* token, set, rule, wildcard
*
* The situation is not too bad as rewrite (->) usage makes ^ and !
* invalid. There is no huge explosion of combinations.
*/
group ASTParser;
// TOKEN AST STUFF
/** ID and output=AST */
tokenRef(token,label,hetero,elementIndex) ::= <<
<super.tokenRef(...)>
<if(backtracking)>if ( backtracking == 0 ) {<endif>
_<label>_tree = (<ASTLabelType>)[treeAdaptor newTreeWithToken:_<label>];
[treeAdaptor addChild:_<label>_tree toTree:root_0];
[_<label>_tree release];
<if(backtracking)>}<endif>
>>
/** ID! and output=AST (same as plain tokenRef) */
tokenRefBang(token,label,elementIndex) ::= "<super.tokenRef(...)>"
/** ID^ and output=AST */
tokenRefRuleRoot(token,label,hetero,elementIndex) ::= <<
<super.tokenRef(...)>
<if(backtracking)>if ( backtracking == 0 ) {<endif>
_<label>_tree = (<ASTLabelType>)[treeAdaptor newTreeWithToken:_<label>];
root_0 = (<ASTLabelType>)[treeAdaptor makeNode:_<label>_tree parentOf:root_0];
[_<label>_tree release];
<if(backtracking)>}<endif>
>>
/** ids+=ID! and output=AST */
tokenRefBangAndListLabel(token,label,elementIndex) ::= <<
<tokenRefBang(...)>
<listLabel(elem=label,...)>
>>
/** label+=TOKEN when output=AST but not rewrite alt */
tokenRefAndListLabel(token,label,hetero,elementIndex) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** Match label+=TOKEN^ when output=AST but not rewrite alt */
tokenRefRuleRootAndListLabel(token,label,hetero,elementIndex) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// SET AST
// the match set stuff is interesting in that it uses an argument list
// to pass code to the default matchSet; another possible way to alter
// inherited code. I don't use the region stuff because I need to pass
// different chunks depending on the operator. I don't like making
// the template name have the operator as the number of templates gets
// large but this is the most flexible--this is as opposed to having
// the code generator call matchSet then add root code or ruleroot code
// plus list label plus ... The combinations might require complicated
// code rather than just added-on code. Investigate that refactoring when
// I have more time.
// TODO: add support for heterogeneous trees
matchSet(s,label,hetero,elementIndex,postmatchCode) ::= <<
<super.matchSet(..., postmatchCode={
<if(backtracking)>if (backtracking == 0) {<endif>
_<label>_tree = (<ASTLabelType>)[treeAdaptor newTreeWithToken:_<label>];
[treeAdaptor addChild:_<label>_tree toTree:root_0];
[_<label>_tree release];
<if(backtracking)>}<endif>
})>
>>
matchSetBang(s,label,elementIndex,postmatchCode) ::= "<super.matchSet(...)>"
// note there is no matchSetTrack because -> rewrites force sets to be
// plain old blocks of alts: (A|B|...|C)
matchSetRuleRoot(s,label,hetero,elementIndex,debug) ::= <<
<super.matchSet(..., postmatchCode={
<if(backtracking)>if (backtracking == 0) {<endif>
_<label>_tree = (<ASTLabelType>)[treeAdaptor newTreeWithToken:_<label>];
root_0 = (<ASTLabelType>)[treeAdaptor makeNode:_<label>_tree parentOf:root_0];
[_<label>_tree release];
<if(backtracking)>}<endif>
})>
>>
// RULE REF AST
/** rule when output=AST */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if (backtracking == 0) <endif>[treeAdaptor addChild:[_<label> tree] toTree:root_0];
>>
/** rule! is same as normal rule ref */
ruleRefBang(rule,label,elementIndex,args,scope) ::= "<super.ruleRef(...)>"
/** rule^ */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if (backtracking == 0) <endif>root_0 = (<ASTLabelType>)[treeAdaptor makeNode:[_<label> tree] parentOf:root_0];
>>
/** x+=rule when output=AST */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem="["+label+" tree]",...)>
>>
/** x+=rule! when output=AST is a rule ref with list addition */
ruleRefBangAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefBang(...)>
<listLabel(elem="["+label+" tree]",...)>
>>
/** x+=rule^ */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<listLabel(elem="["+label+" tree]",...)>
>>
// WILDCARD AST
wildcard(label,elementIndex) ::= <<
<super.wildcard(...)>
<if(backtracking)>if (backtracking == 0) {<endif>
_<label>_tree = (<ASTLabelType>)[treeAdaptor newTreeWithToken:_<label>];
[treeAdaptor addChild:_<label>_tree toTree:root_0];
[_<label>_tree release];
<if(backtracking)>}<endif>
>>
wildcardBang(label,elementIndex) ::= "<super.wildcard(...)>"
wildcardRuleRoot(label,elementIndex) ::= <<
<super.wildcard(...)>
<if(backtracking)>if (backtracking == 0) {<endif>
_<label>_tree = (<ASTLabelType>)[treeAdaptor newTreeWithToken:_<label>];
root_0 = (<ASTLabelType>)[treeAdaptor makeNode:_<label>_tree parentOf:root_0];
[_<label>_tree release];
<if(backtracking)>}<endif>
>>
createNodeFromToken(label,hetero) ::= <<
<if(hetero)>
new <hetero>(<label>) <! new MethodNode(IDLabel) !>
<else>
(<ASTLabelType>)[treeAdaptor newTreeWithToken:_<label>]
<endif>
>>

/*
[The "BSD licence"]
Copyright (c) 2007 Kay Roepke
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
group ASTTreeParser;
/** match ^(root children) in tree parser; override here to
* add tree construction actions.
*/
tree(root, actionsAfterRoot, children, nullableChildList,
enclosingTreeLevel, treeLevel) ::= <<
{
<ASTLabelType> root_<treeLevel> = [treeAdaptor newEmptyTree];
<root:element()>
<actionsAfterRoot:element()>
<if(nullableChildList)>
if ( [input LA:1] == ANTLRTokenTypeDOWN ) {
[self match:input tokenType:ANTLRTokenTypeDOWN follow:nil]; <checkRuleBacktrackFailure()>
<children:element()>
[self match:input tokenType:ANTLRTokenTypeUP follow:nil]; <checkRuleBacktrackFailure()>
}
<else>
[self match:input tokenType:ANTLRTokenTypeDOWN follow:nil]; <checkRuleBacktrackFailure()>
<children:element()>
[self match:input tokenType:ANTLRTokenTypeUP follow:nil]; <checkRuleBacktrackFailure()>
<endif>
[root_<treeLevel> release];
}<\n>
>>
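The DOWN/UP navigation above is easier to see against the flat node stream a tree parser consumes: each subtree ^(root children) is serialized as root DOWN child* UP, and the nullable-child-list branch corresponds to checking whether DOWN follows the root at all. A minimal Python sketch of that matching loop (token names and the helper are stand-ins mirroring the template, not the ANTLR runtime):

```python
# A subtree ^(PLUS INT INT) is serialized as: PLUS DOWN INT INT UP.
DOWN, UP = "DOWN", "UP"

def match_tree(stream):
    """Consume one ^(root children) group from a flat node stream."""
    pos = 0
    root = stream[pos]
    pos += 1
    children = []
    # nullable child list: a leaf node is not followed by DOWN at all
    if pos < len(stream) and stream[pos] == DOWN:
        pos += 1
        while stream[pos] != UP:
            children.append(stream[pos])
            pos += 1
        pos += 1  # consume UP
    return root, children

root, children = match_tree(["PLUS", DOWN, "INT", "INT", UP])
```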
/** What to emit when there is no rewrite. For auto build
* mode, does nothing.
*/
noRewrite(rewriteBlockLevel, treeLevel) ::= <<
<if(rewriteMode)>retval.tree = (<ASTLabelType>)retval.start;<endif>
>>
// TOKEN AST STUFF
/** ID auto construct */
tokenRef(token,label,elementIndex) ::= <<
<super.tokenRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<label>_tree = (<ASTLabelType>)adaptor.dupNode(<label>);
adaptor.addChild(root_<treeLevel>, <label>_tree);
<if(backtracking)>}<endif>
>>
/** label+=TOKEN auto construct */
tokenRefAndListLabel(token,label,elementIndex) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) auto construct */
tokenRefRuleRoot(token,label,elementIndex) ::= <<
<super.tokenRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) {<endif>
<label>_tree = (<ASTLabelType>)adaptor.dupNode(<label>);
root_<treeLevel> = (<ASTLabelType>)adaptor.becomeRoot(<label>_tree, root_<treeLevel>);
<if(backtracking)>}<endif>
>>
/** Match ^(label+=TOKEN ...) auto construct */
tokenRefRuleRootAndListLabel(token,label,elementIndex) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// RULE REF AST
/** rule auto construct */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>adaptor.addChild(root_<treeLevel>, <label>.getTree());
>>
/** x+=rule auto construct */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".getTree()",...)>
>>
/** ^(rule ...) auto construct */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<if(backtracking)>if ( state.backtracking==0 ) <endif>root_<treeLevel> = (<ASTLabelType>)adaptor.becomeRoot(<label>.getTree(), root_<treeLevel>);
>>
/** ^(x+=rule ...) auto construct */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<listLabel(elem=label+".getTree()",...)>
>>
/** Streams for token refs are tree nodes now; override to
* change nextToken to nextNode.
*/
createRewriteNodeFromElement(token,hetero,scope) ::= <<
#error Heterogeneous tree support not implemented.
<if(hetero)>
new <hetero>(stream_<token>.nextNode())
<else>
stream_<token>.nextNode()
<endif>
>>

/*
[The "BSD licence"]
Copyright (c) 2006 Kay Roepke
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Template overrides to add debugging to normal Objective-C output;
* If ASTs are built, then you'll also get ASTDbg.stg loaded.
*/
group Dbg;
@headerFile.imports() ::= <<
<@super.imports()>
#import \<ANTLR/ANTLRDebug.h>
>>
@parserHeaderFile.ivars() ::= <<
int ruleLevel;
NSArray *ruleNames;
>>
@parserHeaderFile.methodsdecl() ::= <<
-(BOOL) evalPredicate:(NSString *)predicate matched:(BOOL)result;<\n>
>>
@genericParser.init() ::= <<
ruleNames = [[NSArray alloc] initWithObjects:<rules:{rST | @"<rST.ruleName>"}; separator=", ", wrap="\n ">, nil];<\n>
>>
@genericParser.dealloc() ::= <<
[ruleNames release];<\n>
>>
@genericParser.methods() ::= <<
-(BOOL) evalPredicate:(NSString *)predicate matched:(BOOL)result
{
[debugListener semanticPredicate:predicate matched:result];
return result;
}<\n>
>>
/* bug: can't use <@super.superClassName()> */
@parserHeaderFile.superClassName() ::= "ANTLRDebug<if(TREE_PARSER)>Tree<endif>Parser"
@rule.preamble() ::= <<
@try { [debugListener enterRule:@"<ruleName>"];
if ( ruleLevel==0 ) [debugListener commence];
ruleLevel++;
[debugListener locationLine:<ruleDescriptor.tree.line> column:<ruleDescriptor.tree.column>];<\n>
>>
@rule.postamble() ::= <<
[debugListener locationLine:<ruleDescriptor.EORNode.line> column:<ruleDescriptor.EORNode.column>];<\n>
}
@finally {
[debugListener exitRule:@"<ruleName>"];
ruleLevel--;
if ( ruleLevel==0 ) [debugListener terminate];
}<\n>
>>
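The preamble/postamble pair implements simple rule-nesting bookkeeping: commence fires only when the outermost rule is entered and terminate only when it is finally left, with @try/@finally guaranteeing the exit events. A Python sketch of the same shape (the listener and its method names are stand-ins mirroring the template):

```python
class DebugListener:
    """Records events; stands in for the debug listener the template calls."""
    def __init__(self):
        self.events = []
    def commence(self):      self.events.append("commence")
    def terminate(self):     self.events.append("terminate")
    def enter_rule(self, n): self.events.append("enter:" + n)
    def exit_rule(self, n):  self.events.append("exit:" + n)

listener = DebugListener()
rule_level = 0

def run_rule(name, body=lambda: None):
    # try/finally mirrors the @try/@finally in the preamble/postamble
    global rule_level
    try:
        listener.enter_rule(name)
        if rule_level == 0:
            listener.commence()      # outermost rule only
        rule_level += 1
        body()
    finally:
        listener.exit_rule(name)
        rule_level -= 1
        if rule_level == 0:
            listener.terminate()     # back at the outermost level

run_rule("expr", lambda: run_rule("atom"))
```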
/* these are handled in the runtime for now.
* stinks, but that's the easiest way to avoid having to generate two
* methods for each synpred
@synpred.start() ::= "[debugListener beginBacktrack:backtracking];"
@synpred.stop() ::= "[debugListener endBacktrack:backtracking wasSuccessful:success];"
*/
// Common debug event triggers used by region overrides below
enterSubRule() ::=
"@try { [debugListener enterSubRule:<decisionNumber>];<\n>"
exitSubRule() ::=
"} @finally { [debugListener exitSubRule:<decisionNumber>]; }<\n>"
enterDecision() ::=
"@try { [debugListener enterDecision:<decisionNumber>];<\n>"
exitDecision() ::=
"} @finally { [debugListener exitDecision:<decisionNumber>]; }<\n>"
enterAlt(n) ::= "[debugListener enterAlt:<n>];<\n>"
// Region overrides that tell various constructs to add debugging triggers
@block.predecision() ::= "<enterSubRule()><enterDecision()>"
@block.postdecision() ::= "<exitDecision()>"
@block.postbranch() ::= "<exitSubRule()>"
@ruleBlock.predecision() ::= "<enterDecision()>"
@ruleBlock.postdecision() ::= "<exitDecision()>"
@ruleBlockSingleAlt.prealt() ::= "<enterAlt(n=\"1\")>"
@blockSingleAlt.prealt() ::= "<enterAlt(n=\"1\")>"
@positiveClosureBlock.preloop() ::= "<enterSubRule()>"
@positiveClosureBlock.postloop() ::= "<exitSubRule()>"
@positiveClosureBlock.predecision() ::= "<enterDecision()>"
@positiveClosureBlock.postdecision() ::= "<exitDecision()>"
@positiveClosureBlock.earlyExitException() ::=
"[debugListener recognitionException:eee];<\n>"
@closureBlock.preloop() ::= "<enterSubRule()>"
@closureBlock.postloop() ::= "<exitSubRule()>"
@closureBlock.predecision() ::= "<enterDecision()>"
@closureBlock.postdecision() ::= "<exitDecision()>"
@altSwitchCase.prealt() ::= "<enterAlt(n=i)>"
@element.prematch() ::=
"[debugListener locationLine:<it.line> column:<it.pos>];"
@matchSet.mismatchedSetException() ::=
"[debugListener recognitionException:mse];"
@dfaState.noViableAltException() ::= "[debugListener recognitionException:nvae];"
@dfaStateSwitch.noViableAltException() ::= "[debugListener recognitionException:nvae];"
dfaDecision(decisionNumber,description) ::= <<
@try {
// isCyclicDecision is only necessary for the Profiler, which I haven't done yet.
// isCyclicDecision = YES;
<super.dfaDecision(...)>
}
@catch (ANTLRNoViableAltException *nvae) {
[debugListener recognitionException:nvae];
@throw nvae;
}
>>
@cyclicDFA.errorMethod() ::= <<
-(void) error:(ANTLRNoViableAltException *)nvae
{
[[recognizer debugListener] recognitionException:nvae];
}
>>
/** Force predicate validation to trigger an event */
evalPredicate(pred,description) ::= <<
[self evalPredicate:@"<description>" matched:<pred>];
>>

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/* in sync with Java/AST.stg revision 36 */
group AST;
finishedBacktracking(block) ::= <<
<if(backtracking)>
if self._state.backtracking == 0:
<block>
<else>
<block>
<endif>
>>
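The guard means tree-construction actions run only on the real parse, never while a syntactic predicate is being evaluated. A minimal sketch of the emitted shape (class names here are stand-ins, not the antlr3 runtime):

```python
class RecognizerState:
    def __init__(self):
        self.backtracking = 0   # > 0 while inside a syntactic predicate

class ParserSketch:
    def __init__(self):
        self._state = RecognizerState()
        self.root_children = []

    def add_child_action(self, node):
        # the shape finishedBacktracking() wraps around <block>
        if self._state.backtracking == 0:
            self.root_children.append(node)

p = ParserSketch()
p._state.backtracking = 1
p.add_child_action("ID")     # suppressed while backtracking
p._state.backtracking = 0
p.add_child_action("ID")     # executed on the real parse
```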
@outputFile.imports() ::= <<
<@super.imports()>
<if(!TREE_PARSER)><! tree parser would already have imported !>
from antlr3.tree import *<\n>
<endif>
>>
/** Add an adaptor property that knows how to build trees */
@genericParser.init() ::= <<
<@super.init()>
self._adaptor = CommonTreeAdaptor()
>>
@genericParser.members() ::= <<
<@super.members()>
def getTreeAdaptor(self):
return self._adaptor
def setTreeAdaptor(self, adaptor):
self._adaptor = adaptor
<grammar.directDelegates:{g|<g:delegateName()>.adaptor = adaptor}; separator="\n">
adaptor = property(getTreeAdaptor, setTreeAdaptor)
>>
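The accessor pair plus `property` is ordinary Python; a self-contained sketch of the same pattern (delegate propagation omitted, class names hypothetical):

```python
class CommonTreeAdaptorStub:
    """Stand-in for antlr3.tree.CommonTreeAdaptor."""

class ParserWithAdaptor:
    def __init__(self):
        self._adaptor = CommonTreeAdaptorStub()

    def getTreeAdaptor(self):
        return self._adaptor

    def setTreeAdaptor(self, adaptor):
        self._adaptor = adaptor

    # attribute access routes through the getter/setter, as in the template
    adaptor = property(getTreeAdaptor, setTreeAdaptor)

p = ParserWithAdaptor()
custom = CommonTreeAdaptorStub()
p.adaptor = custom            # goes through setTreeAdaptor
```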
@returnScope.ruleReturnInit() ::= <<
self.tree = None
>>
/** Add a variable to track rule's return AST */
ruleDeclarations() ::= <<
<super.ruleDeclarations()>
root_0 = None<\n>
>>
ruleLabelDefs() ::= <<
<super.ruleLabelDefs()>
<ruleDescriptor.tokenLabels:{<it.label.text>_tree = None}; separator="\n">
<ruleDescriptor.tokenListLabels:{<it.label.text>_tree = None}; separator="\n">
<ruleDescriptor.allTokenRefsInAltsWithRewrites
:{stream_<it> = RewriteRule<rewriteElementType>Stream(self._adaptor, "token <it>")}; separator="\n">
<ruleDescriptor.allRuleRefsInAltsWithRewrites
:{stream_<it> = RewriteRuleSubtreeStream(self._adaptor, "rule <it>")}; separator="\n">
>>
/** When doing auto AST construction, we must define some variables;
* These should be turned off if doing rewrites. This must be a "mode"
* as a rule could have both rewrite and AST within the same alternative
* block.
*/
@alt.declarations() ::= <<
<if(autoAST)>
<if(outerAlt)>
<if(!rewriteMode)>
root_0 = self._adaptor.nil()<\n>
<endif>
<endif>
<endif>
>>
// T r a c k i n g R u l e E l e m e n t s
/** ID and track it for use in a rewrite rule */
tokenRefTrack(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)> <! Track implies no auto AST construction!>
<finishedBacktracking({stream_<token>.add(<label>)})>
>>
/** ids+=ID and track it for use in a rewrite rule; adds to ids *and*
* to the tracking list stream_ID for use in the rewrite.
*/
tokenRefTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefTrack(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) track for rewrite */
tokenRefRuleRootTrack(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)>
<finishedBacktracking({stream_<token>.add(<label>)})>
>>
/** Match ^(label+=TOKEN ...) track for rewrite */
tokenRefRuleRootTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefRuleRootTrack(...)>
<listLabel(elem=label,...)>
>>
wildcardTrack(label,elementIndex) ::= <<
<super.wildcard(...)>
>>
/** rule when output=AST and tracking for rewrite */
ruleRefTrack(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<finishedBacktracking({stream_<rule.name>.add(<label>.tree)})>
>>
/** x+=rule when output=AST and tracking for rewrite */
ruleRefTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefTrack(...)>
<listLabel(elem=label+".tree",...)>
>>
/** ^(rule ...) rewrite */
ruleRefRuleRootTrack(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<finishedBacktracking({stream_<rule.name>.add(<label>.tree)})>
>>
/** ^(x+=rule ...) rewrite */
ruleRefRuleRootTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRootTrack(...)>
<listLabel(elem=label+".tree",...)>
>>
// R e w r i t e
rewriteCode(
alts, description,
referencedElementsDeep, // ALL referenced elements to right of ->
referencedTokenLabels,
referencedTokenListLabels,
referencedRuleLabels,
referencedRuleListLabels,
rewriteBlockLevel, enclosingTreeLevel, treeLevel) ::=
<<
# AST Rewrite
# elements: <referencedElementsDeep; separator=", ">
# token labels: <referencedTokenLabels; separator=", ">
# rule labels: <referencedRuleLabels; separator=", ">
# token list labels: <referencedTokenListLabels; separator=", ">
# rule list labels: <referencedRuleListLabels; separator=", ">
<finishedBacktracking({
<prevRuleRootRef()>.tree = root_0
<rewriteCodeLabels()>
root_0 = self._adaptor.nil()
<first(alts):rewriteAltFirst(); anchor>
<rest(alts):{a | el<rewriteAltRest(a)>}; anchor, separator="\n\n">
<! if tree parser and rewrite=true !>
<if(TREE_PARSER)>
<if(rewriteMode)>
<prevRuleRootRef()>.tree = self._adaptor.rulePostProcessing(root_0)
self.input.replaceChildren(
self._adaptor.getParent(retval.start),
self._adaptor.getChildIndex(retval.start),
self._adaptor.getChildIndex(_last),
retval.tree
)<\n>
<endif>
<endif>
<! if parser or tree-parser and rewrite!=true, we need to set result !>
<if(!TREE_PARSER)>
<prevRuleRootRef()>.tree = root_0<\n>
<else>
<if(!rewriteMode)>
<prevRuleRootRef()>.tree = root_0<\n>
<endif>
<endif>
})>
>>
rewriteCodeLabels() ::= <<
<referencedTokenLabels
:{stream_<it> = RewriteRule<rewriteElementType>Stream(self._adaptor, "token <it>", <it>)};
separator="\n"
>
<referencedTokenListLabels
:{stream_<it> = RewriteRule<rewriteElementType>Stream(self._adaptor, "token <it>", list_<it>)};
separator="\n"
>
<referencedRuleLabels
:{
if <it> is not None:
stream_<it> = RewriteRuleSubtreeStream(self._adaptor, "token <it>", <it>.tree)
else:
stream_<it> = RewriteRuleSubtreeStream(self._adaptor, "token <it>", None)
};
separator="\n"
>
<referencedRuleListLabels
:{stream_<it> = RewriteRuleSubtreeStream(self._adaptor, "token <it>", list_<it>)};
separator="\n"
>
>>
/** Generate code for an optional rewrite block; note it uses the deep ref'd element
* list rather than the shallow list used by other blocks.
*/
rewriteOptionalBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in the immediate block; no nested blocks
description) ::=
<<
# <fileName>:<description>
if <referencedElementsDeep:{el | stream_<el>.hasNext()}; separator=" or ">:
<alt>
<referencedElementsDeep:{el | stream_<el>.reset();<\n>}>
>>
rewriteClosureBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in the immediate block; no nested blocks
description) ::=
<<
# <fileName>:<description>
while <referencedElements:{el | stream_<el>.hasNext()}; separator=" or ">:
<alt>
<referencedElements:{el | stream_<el>.reset();<\n>}>
>>
rewritePositiveClosureBlock(
alt,rewriteBlockLevel,
referencedElementsDeep, // all nested refs
referencedElements, // elements in the immediate block; no nested blocks
description) ::=
<<
# <fileName>:<description>
if not (<referencedElements:{el | stream_<el>.hasNext()}; separator=" or ">):
raise RewriteEarlyExitException()
while <referencedElements:{el | stream_<el>.hasNext()}; separator=" or ">:
<alt>
<referencedElements:{el | stream_<el>.reset()<\n>}>
>>
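All three rewrite-block templates drive one stream protocol: hasNext() gates the loop (or the early-exit check for (...)+), the alt pulls elements, and reset() rewinds the stream for later reuse. A minimal stand-in (method names mirror the templates; this is not the antlr3 RewriteRule*Stream implementation):

```python
class MiniRewriteStream:
    """Buffers matched elements for replay during a -> rewrite."""
    def __init__(self, elements=()):
        self.elements = list(elements)
        self.cursor = 0

    def add(self, el):
        self.elements.append(el)

    def hasNext(self):
        return self.cursor < len(self.elements)

    def nextTree(self):
        el = self.elements[self.cursor]
        self.cursor += 1
        return el

    def reset(self):
        self.cursor = 0

stream_ID = MiniRewriteStream(["a", "b"])
out = []
if not stream_ID.hasNext():          # rewritePositiveClosureBlock's guard
    raise RuntimeError("RewriteEarlyExitException")
while stream_ID.hasNext():           # the (...)* / (...)+ loop shape
    out.append(stream_ID.nextTree())
stream_ID.reset()                    # rewind, as the templates emit
```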
rewriteAltRest(a) ::= <<
<if(a.pred)>
if <a.pred>:
# <a.description>
<a.alt>
<else>
se: <! little hack to get if .. elif .. else block right !>
# <a.description>
<a.alt>
<endif>
>>
rewriteAltFirst(a) ::= <<
<if(a.pred)>
if <a.pred>:
# <a.description>
<a.alt>
<else>
# <a.description>
<a.alt>
<endif>
>>
/** For empty rewrites: "r : ... -> ;" */
rewriteEmptyAlt() ::= "root_0 = None"
rewriteTree(root,children,description,enclosingTreeLevel,treeLevel) ::= <<
# <fileName>:<description>
root_<treeLevel> = self._adaptor.nil()
<root:rewriteElement()>
<children:rewriteElement()>
self._adaptor.addChild(root_<enclosingTreeLevel>, root_<treeLevel>)<\n>
>>
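The nil-root pattern is the core of tree rewriting: children accumulate under a fresh nil node one tree level deeper, and the finished subtree is then attached to the enclosing level's root. A toy adaptor (not antlr3's CommonTreeAdaptor) makes the shape concrete:

```python
class MiniAdaptor:
    """Toy adaptor modeling nil roots as plain lists."""
    def nil(self):
        return []

    def addChild(self, root, child):
        root.append(child)

adaptor = MiniAdaptor()
root_0 = adaptor.nil()              # enclosing tree level
root_1 = adaptor.nil()              # level for ^( ... )
adaptor.addChild(root_1, "ID")
adaptor.addChild(root_1, "INT")
adaptor.addChild(root_0, root_1)    # attach the finished subtree
```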
rewriteElementList(elements) ::= "<elements:rewriteElement()>"
rewriteElement(e) ::= <<
<@pregen()>
<e.el>
>>
/** Gen ID or ID[args] */
rewriteTokenRef(token,elementIndex,hetero,args) ::= <<
self._adaptor.addChild(root_<treeLevel>, <createRewriteNodeFromElement(...)>)<\n>
>>
/** Gen $label ... where defined via label=ID */
rewriteTokenLabelRef(label,elementIndex) ::= <<
self._adaptor.addChild(root_<treeLevel>, stream_<label>.nextNode())<\n>
>>
/** Gen $label ... where defined via label+=ID */
rewriteTokenListLabelRef(label,elementIndex) ::= <<
self._adaptor.addChild(root_<treeLevel>, stream_<label>.nextNode())<\n>
>>
/** Gen ^($label ...) */
rewriteTokenLabelRefRoot(label,elementIndex) ::= <<
root_<treeLevel> = self._adaptor.becomeRoot(stream_<label>.nextNode(), root_<treeLevel>)<\n>
>>
/** Gen ^($label ...) where label+=... */
rewriteTokenListLabelRefRoot ::= rewriteTokenLabelRefRoot
/** Gen ^(ID ...) or ^(ID[args] ...) */
rewriteTokenRefRoot(token,elementIndex,hetero,args) ::= <<
root_<treeLevel> = self._adaptor.becomeRoot(<createRewriteNodeFromElement(...)>, root_<treeLevel>)<\n>
>>
rewriteImaginaryTokenRef(args,token,hetero,elementIndex) ::= <<
self._adaptor.addChild(root_<treeLevel>, <createImaginaryNode(tokenType=token, ...)>)<\n>
>>
rewriteImaginaryTokenRefRoot(args,token,hetero,elementIndex) ::= <<
root_<treeLevel> = self._adaptor.becomeRoot(<createImaginaryNode(tokenType=token, ...)>, root_<treeLevel>)<\n>
>>
/** plain -> {foo} action */
rewriteAction(action) ::= <<
<!FIXME(96,untested)!>
root_0 = <action><\n>
>>
/** What is the name of the previous value of this rule's root tree? This
* lets us refer to $rule to mean the previous value.  I am reusing the
* variable 'tree' sitting in the retval struct to hold the value of root_0 right
* before I set it during rewrites. The assign will be to retval.tree.
*/
prevRuleRootRef() ::= "retval"
rewriteRuleRef(rule) ::= <<
self._adaptor.addChild(root_<treeLevel>, stream_<rule>.nextTree())<\n>
>>
rewriteRuleRefRoot(rule) ::= <<
root_<treeLevel> = self._adaptor.becomeRoot(stream_<rule>.nextNode(), root_<treeLevel>)<\n>
>>
rewriteNodeAction(action) ::= <<
self._adaptor.addChild(root_<treeLevel>, <action>)<\n>
>>
rewriteNodeActionRoot(action) ::= <<
root_<treeLevel> = self._adaptor.becomeRoot(<action>, root_<treeLevel>)<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel=rule */
rewriteRuleLabelRef(label) ::= <<
self._adaptor.addChild(root_<treeLevel>, stream_<label>.nextTree())<\n>
>>
/** Gen $ruleLabel ... where defined via ruleLabel+=rule */
rewriteRuleListLabelRef(label) ::= <<
self._adaptor.addChild(root_<treeLevel>, stream_<label>.nextTree())<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel=rule */
rewriteRuleLabelRefRoot(label) ::= <<
root_<treeLevel> = self._adaptor.becomeRoot(stream_<label>.nextNode(), root_<treeLevel>)<\n>
>>
/** Gen ^($ruleLabel ...) where ruleLabel+=rule */
rewriteRuleListLabelRefRoot(label) ::= <<
root_<treeLevel> = self._adaptor.becomeRoot(stream_<label>.nextNode(), root_<treeLevel>)<\n>
>>
createImaginaryNode(tokenType,hetero,args) ::= <<
<if(hetero)>
<! new MethodNode(IDLabel, args) !>
<hetero>(<tokenType><if(args)>, <args; separator=", "><endif>)
<else>
<if (!args)>self._adaptor.createFromType(<tokenType>, "<tokenType>")
<else>self._adaptor.create(<tokenType>, <args; separator=", ">)
<endif>
<endif>
>>
//<! need to call different adaptor.create*() methods depending on argument count !>
//<if (!args)>self._adaptor.createFromType(<tokenType>, "<tokenType>")
//<else><if (!rest(args))>self._adaptor.createFromType(<tokenType>, <first(args)>)
//<else><if (!rest(rest(args)))>self._adaptor.createFromToken(<tokenType>, <first(args)>, <first(rest(args))>)
//<endif>
//<endif>
//<endif>
createRewriteNodeFromElement(token,hetero,args) ::= <<
<if(hetero)>
<hetero>(stream_<token>.nextToken()<if(args)>, <args; separator=", "><endif>)
<else>
<if(args)> <! must create new node from old !>
<! need to call different adaptor.create*() methods depending on argument count !>
<if (!args)>self._adaptor.createFromType(<token>, "<token>")
<else><if (!rest(args))>self._adaptor.createFromToken(<token>, <first(args)>)
<else><if (!rest(rest(args)))>self._adaptor.createFromToken(<token>, <first(args)>, <first(rest(args))>)
<endif>
<endif>
<endif>
<else>
stream_<token>.nextNode()
<endif>
<endif>
>>

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Templates for building ASTs during normal parsing.
*
* Deal with many combinations. Dimensions are:
* Auto build or rewrite
* no label, label, list label (label/no-label handled together)
* child, root
* token, set, rule, wildcard
*
* The situation is not too bad as rewrite (->) usage makes ^ and !
* invalid. There is no huge explosion of combinations.
*/
group ASTParser;
finishedBacktracking(block) ::= <<
<if(backtracking)>
if self._state.backtracking == 0:
<block>
<else>
<block>
<endif>
>>
@rule.setErrorReturnValue() ::= <<
retval.tree = self._adaptor.errorNode(self.input, retval.start, self.input.LT(-1), re)
>>
// TOKEN AST STUFF
/** ID and output=AST */
tokenRef(token,label,elementIndex,hetero) ::= <<
<super.tokenRef(...)>
<finishedBacktracking({
<label>_tree = <createNodeFromToken(...)>
self._adaptor.addChild(root_0, <label>_tree)
})>
>>
/** ID! and output=AST (same as plain tokenRef) */
tokenRefBang(token,label,elementIndex) ::= "<super.tokenRef(...)>"
/** ID^ and output=AST */
tokenRefRuleRoot(token,label,elementIndex,hetero) ::= <<
<super.tokenRef(...)>
<finishedBacktracking({
<label>_tree = <createNodeFromToken(...)>
root_0 = self._adaptor.becomeRoot(<label>_tree, root_0)
})>
>>
/** ids+=ID! and output=AST */
tokenRefBangAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefBang(...)>
<listLabel(elem=label,...)>
>>
/** label+=TOKEN when output=AST but not rewrite alt */
tokenRefAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** Match label+=TOKEN^ when output=AST but not rewrite alt */
tokenRefRuleRootAndListLabel(token,label,hetero,elementIndex) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// SET AST
// the match set stuff is interesting in that it uses an argument list
// to pass code to the default matchSet; another possible way to alter
// inherited code. I don't use the region stuff because I need to pass
// different chunks depending on the operator. I don't like making
// the template name have the operator as the number of templates gets
// large but this is the most flexible--this is as opposed to having
// the code generator call matchSet then add root code or ruleroot code
// plus list label plus ... The combinations might require complicated
// rather than just added on code. Investigate that refactoring when
// I have more time.
matchSet(s,label,hetero,elementIndex,postmatchCode) ::= <<
<super.matchSet(..., postmatchCode={<finishedBacktracking({self._adaptor.addChild(root_0, <createNodeFromToken(...)>)})>})>
>>
matchRuleBlockSet(s,label,hetero,elementIndex,postmatchCode,treeLevel="0") ::= <<
<matchSet(...)>
>>
matchSetBang(s,label,elementIndex,postmatchCode) ::= "<super.matchSet(...)>"
// note there is no matchSetTrack because -> rewrites force sets to be
// plain old blocks of alts: (A|B|...|C)
matchSetRuleRoot(s,label,hetero,elementIndex,debug) ::= <<
<if(label)>
<label> = self.input.LT(1)<\n>
<endif>
<super.matchSet(..., postmatchCode={<finishedBacktracking({root_0 = self._adaptor.becomeRoot(<createNodeFromToken(...)>, root_0)})>})>
>>
// RULE REF AST
/** rule when output=AST */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<finishedBacktracking({self._adaptor.addChild(root_0, <label>.tree)})>
>>
/** rule! is same as normal rule ref */
ruleRefBang(rule,label,elementIndex,args,scope) ::= "<super.ruleRef(...)>"
/** rule^ */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
<super.ruleRef(...)>
<finishedBacktracking({root_0 = self._adaptor.becomeRoot(<label>.tree, root_0)})>
>>
/** x+=rule when output=AST */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".tree",...)>
>>
/** x+=rule! when output=AST is a rule ref with list addition */
ruleRefBangAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefBang(...)>
<listLabel(elem=label+".tree",...)>
>>
/** x+=rule^ */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<listLabel(elem=label+".tree",...)>
>>
// WILDCARD AST
wildcard(label,elementIndex) ::= <<
<super.wildcard(...)>
<finishedBacktracking({
<label>_tree = self._adaptor.createWithPayload(<label>)
self._adaptor.addChild(root_0, <label>_tree)
})>
>>
wildcardBang(label,elementIndex) ::= "<super.wildcard(...)>"
wildcardRuleRoot(label,elementIndex) ::= <<
<super.wildcard(...)>
<finishedBacktracking({
<label>_tree = self._adaptor.createWithPayload(<label>)
root_0 = self._adaptor.becomeRoot(<label>_tree, root_0)
})>
>>
createNodeFromToken(label,hetero) ::= <<
<if(hetero)>
<hetero>(<label>) <! new MethodNode(IDLabel) !>
<else>
self._adaptor.createWithPayload(<label>)
<endif>
>>
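The two branches above correspond to a user-declared heterogeneous node type (e.g. `ID<MethodNode>` in the grammar) versus the adaptor's default node creation. A minimal sketch of that choice, using illustrative classes rather than the real runtime's:

```python
# Illustrative classes only; real generated code uses the configured TreeAdaptor.
class CommonTree:
    def __init__(self, token):
        self.token = token

class MethodNode(CommonTree):
    """Example heterogeneous node type, as written ID<MethodNode> in a grammar."""

class Adaptor:
    def createWithPayload(self, token):
        return CommonTree(token)

def create_node_from_token(adaptor, token, hetero=None):
    if hetero is not None:
        return hetero(token)                  # the <hetero>(<label>) branch
    return adaptor.createWithPayload(token)   # the default branch

node = create_node_from_token(Adaptor(), "ID", hetero=MethodNode)
plain = create_node_from_token(Adaptor(), "INT")
```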
ruleCleanUp() ::= <<
<super.ruleCleanUp()>
<finishedBacktracking({
retval.tree = self._adaptor.rulePostProcessing(root_0)
self._adaptor.setTokenBoundaries(retval.tree, retval.start, retval.stop)
})>
>>

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Templates for building ASTs during tree parsing.
*
* Deal with many combinations. Dimensions are:
* Auto build or rewrite
* no label, label, list label (label/no-label handled together)
* child, root
* token, set, rule, wildcard
*
* Each combination has its own template except that label/no label
* is combined into tokenRef, ruleRef, ...
*/
group ASTTreeParser;
finishedBacktracking(block) ::= <<
<if(backtracking)>
if self._state.backtracking == 0:
    <block>
<else>
<block>
<endif>
>>
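When `backtracking` is set for the grammar, this guard wraps every tree-building action so that side effects run only when the parser is not speculating. A minimal sketch of the generated pattern (the `_State` and parser classes here are illustrative, not the ANTLR runtime's):

```python
# Illustrative sketch of the guard finishedBacktracking emits.
class _State:
    def __init__(self):
        # 0 means the parser is really consuming input;
        # >0 means it is speculating inside a syntactic predicate.
        self.backtracking = 0

class SketchParser:
    def __init__(self):
        self._state = _State()

    def add_child(self, root, node):
        # Expansion of <finishedBacktracking({...})> with backtracking on:
        if self._state.backtracking == 0:
            root.append(node)

parser = SketchParser()
root = []
parser.add_child(root, "ID")      # normal parse: child is added
parser._state.backtracking = 1
parser.add_child(root, "INT")     # speculative parse: skipped
```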
/** Add a variable to track last element matched */
ruleDeclarations() ::= <<
<super.ruleDeclarations()>
_first_0 = None
_last = None<\n>
>>
/** What to emit when there is no rewrite rule. For auto build
* mode, does nothing.
*/
noRewrite(rewriteBlockLevel, treeLevel) ::= <<
<finishedBacktracking({
<if(rewriteMode)>
retval.tree = _first_0
if self._adaptor.getParent(retval.tree) is not None and self._adaptor.isNil(self._adaptor.getParent(retval.tree)):
    retval.tree = self._adaptor.getParent(retval.tree)
<endif>
})>
>>
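In rewrite mode the cleanup above returns `_first_0`, hoisting to the enclosing nil root if the first matched node was re-parented during the walk. A self-contained sketch, with a hypothetical `Node` class standing in for the adaptor's tree type:

```python
# Hypothetical stand-in for the adaptor's node type and parent/nil queries.
class Node:
    def __init__(self, text=None, nil=False):
        self.text = text
        self.nil = nil       # a "nil" node is a flat list root with no payload
        self.parent = None

def no_rewrite_cleanup(first_0):
    """Mirror of the rewrite-mode branch of noRewrite()."""
    tree = first_0
    # If the first matched node was placed under a nil list root,
    # return that enclosing root rather than the child itself.
    if tree.parent is not None and tree.parent.nil:
        tree = tree.parent
    return tree

nil_root = Node(nil=True)
child = Node("ID")
child.parent = nil_root
hoisted = no_rewrite_cleanup(child)       # hoisted to the nil root
orphan = no_rewrite_cleanup(Node("INT"))  # no parent: returned unchanged
```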
/** match ^(root children) in tree parser; override here to
* add tree construction actions.
*/
tree(root, actionsAfterRoot, children, nullableChildList,
enclosingTreeLevel, treeLevel) ::= <<
_last = self.input.LT(1)
_save_last_<treeLevel> = _last
_first_<treeLevel> = None
<if(!rewriteMode)>
root_<treeLevel> = self._adaptor.nil()<\n>
<endif>
<root:element()>
<if(rewriteMode)>
<finishedBacktracking({
<if(root.el.rule)>
if _first_<enclosingTreeLevel> is None:
    _first_<enclosingTreeLevel> = <root.el.label>.tree<\n>
<else>
if _first_<enclosingTreeLevel> is None:
    _first_<enclosingTreeLevel> = <root.el.label><\n>
<endif>
})>
<endif>
<actionsAfterRoot:element()>
<if(nullableChildList)>
if self.input.LA(1) == DOWN:
    self.match(self.input, DOWN, None)
    <children:element()>
    self.match(self.input, UP, None)
<else>
self.match(self.input, DOWN, None)
<children:element()>
self.match(self.input, UP, None)<\n>
<endif>
<if(!rewriteMode)>
self._adaptor.addChild(root_<enclosingTreeLevel>, root_<treeLevel>)<\n>
<endif>
_last = _save_last_<treeLevel>
>>
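Tree parsers see a flattened node stream in which DOWN/UP tokens bracket each root's children; the `tree(...)` template emits exactly that navigation. A runnable sketch under illustrative names (children are matched as flat nodes here, whereas the real generated code recurses per child element):

```python
# DOWN/UP sentinels and the stream class are illustrative, not the runtime's.
DOWN, UP = "<DOWN>", "<UP>"

class FlatStream:
    """A flattened tree: ^(+ 1 2) serializes to ['+', DOWN, '1', '2', UP]."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.pos = 0

    def LA(self, k):
        return self.nodes[self.pos + k - 1]

    def consume(self):
        node = self.nodes[self.pos]
        self.pos += 1
        return node

def match_tree(stream):
    """Match ^(root children...); children optional (nullableChildList)."""
    root = stream.consume()
    children = []
    if stream.LA(1) == DOWN:          # the generated LA(1) == DOWN test
        stream.consume()              # match DOWN
        while stream.LA(1) != UP:
            children.append(stream.consume())
        stream.consume()              # match UP
    return root, children

root, kids = match_tree(FlatStream(["+", DOWN, "1", "2", UP]))
```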
// TOKEN AST STUFF
/** ID! and output=AST (same as plain tokenRef) 'cept add
* setting of _last
*/
tokenRefBang(token,label,elementIndex) ::= <<
_last = self.input.LT(1)
<super.tokenRef(...)>
>>
/** ID auto construct */
tokenRef(token,label,elementIndex,hetero) ::= <<
_last = self.input.LT(1)
<super.tokenRef(...)>
<if(!rewriteMode)>
<finishedBacktracking({
<if(hetero)>
<label>_tree = <hetero>(<label>)
<else>
<label>_tree = self._adaptor.dupNode(<label>)
<endif><\n>
self._adaptor.addChild(root_<treeLevel>, <label>_tree)
})>
<else> <! rewrite mode !>
<finishedBacktracking({
if _first_<treeLevel> is None:
    _first_<treeLevel> = <label><\n>
})>
<endif>
>>
/** label+=TOKEN auto construct */
tokenRefAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRef(...)>
<listLabel(elem=label,...)>
>>
/** ^(ID ...) auto construct */
tokenRefRuleRoot(token,label,elementIndex,hetero) ::= <<
_last = self.input.LT(1)
<super.tokenRef(...)>
<if(!rewriteMode)>
<finishedBacktracking({
<if(hetero)>
<label>_tree = <hetero>(<label>)
<else>
<label>_tree = self._adaptor.dupNode(<label>)
<endif><\n>
root_<treeLevel> = self._adaptor.becomeRoot(<label>_tree, root_<treeLevel>)
})>
<endif>
>>
/** Match ^(label+=TOKEN ...) auto construct */
tokenRefRuleRootAndListLabel(token,label,elementIndex,hetero) ::= <<
<tokenRefRuleRoot(...)>
<listLabel(elem=label,...)>
>>
// SET AST
matchSet(s,label,hetero,elementIndex,postmatchCode) ::= <<
_last = self.input.LT(1)
<super.matchSet(..., postmatchCode={
<if(!rewriteMode)>
<finishedBacktracking({
<if(hetero)>
<label>_tree = <hetero>(<label>)
<else>
<label>_tree = self._adaptor.dupNode(<label>)
<endif><\n>
self._adaptor.addChild(root_<treeLevel>, <label>_tree)
})>
<endif>
})>
>>
matchRuleBlockSet(s,label,hetero,elementIndex,postmatchCode,treeLevel="0") ::= <<
<matchSet(...)>
<noRewrite()> <! set return tree !>
>>
matchSetBang(s,label,elementIndex,postmatchCode) ::= <<
_last = self.input.LT(1)
<super.matchSet(...)>
>>
matchSetRuleRoot(s,label,hetero,elementIndex,debug) ::= <<
<super.matchSet(..., postmatchCode={
<if(!rewriteMode)>
<finishedBacktracking({
<if(hetero)>
<label>_tree = <hetero>(<label>)
<else>
<label>_tree = self._adaptor.dupNode(<label>)
<endif><\n>
root_<treeLevel> = self._adaptor.becomeRoot(<label>_tree, root_<treeLevel>)
})>
<endif>
})>
>>
// RULE REF AST
/** rule auto construct */
ruleRef(rule,label,elementIndex,args,scope) ::= <<
_last = self.input.LT(1)
<super.ruleRef(...)>
<finishedBacktracking({
<if(!rewriteMode)>
self._adaptor.addChild(root_<treeLevel>, <label>.tree)
<else> <! rewrite mode !>
if _first_<treeLevel> is None:
    _first_<treeLevel> = <label>.tree<\n>
<endif>
})>
>>
/** x+=rule auto construct */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".tree",...)>
>>
/** ^(rule ...) auto construct */
ruleRefRuleRoot(rule,label,elementIndex,args,scope) ::= <<
_last = self.input.LT(1)
<super.ruleRef(...)>
<if(!rewriteMode)>
<finishedBacktracking({
root_<treeLevel> = self._adaptor.becomeRoot(<label>.tree, root_<treeLevel>)
})>
<endif>
>>
/** ^(x+=rule ...) auto construct */
ruleRefRuleRootAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRefRuleRoot(...)>
<listLabel(elem=label+".tree",...)>
>>
/** rule when output=AST and tracking for rewrite */
ruleRefTrack(rule,label,elementIndex,args,scope) ::= <<
_last = self.input.LT(1)
<super.ruleRefTrack(...)>
>>
/** x+=rule when output=AST and tracking for rewrite */
ruleRefTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
_last = self.input.LT(1)
<super.ruleRefTrackAndListLabel(...)>
>>
/** ^(rule ...) rewrite */
ruleRefRuleRootTrack(rule,label,elementIndex,args,scope) ::= <<
_last = self.input.LT(1)
<super.ruleRefRuleRootTrack(...)>
>>
/** ^(x+=rule ...) rewrite */
ruleRefRuleRootTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
_last = self.input.LT(1)
<super.ruleRefRuleRootTrackAndListLabel(...)>
>>
/** Streams for token refs are tree nodes now; override to
* change nextToken to nextNode.
*/
createRewriteNodeFromElement(token,hetero,scope) ::= <<
<if(hetero)>
<hetero>(stream_<token>.nextNode())
<else>
stream_<token>.nextNode()
<endif>
>>
ruleCleanUp() ::= <<
<super.ruleCleanUp()>
<if(!rewriteMode)>
<finishedBacktracking({
retval.tree = self._adaptor.rulePostProcessing(root_0)
})>
<endif>
>>

/*
[The "BSD licence"]
Copyright (c) 2005-2006 Terence Parr
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/** Template subgroup to add template rewrite output
* If debugging, then you'll also get STDbg.stg loaded.
*/
group ST;
@outputFile.imports() ::= <<
<@super.imports()>
import stringtemplate3
>>
/** Add this to each rule's return value struct */
@returnScope.ruleReturnInit() ::= <<
self.st = None
>>
@returnScope.ruleReturnMembers() ::= <<
def getTemplate(self):
    return self.st

def toString(self):
    if self.st is not None:
        return self.st.toString()
    return None
__str__ = toString
>>
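Concretely, a return scope built from the members above behaves like this sketch (the class names are illustrative; generated scopes also carry start/stop and tree fields not shown here):

```python
# Illustrative stand-in for a generated rule-return scope holding a template.
class FakeTemplate:
    def __init__(self, text):
        self.text = text
    def toString(self):
        return self.text

class RuleReturnScope:
    def __init__(self):
        self.st = None            # set by a -> template rewrite

    def getTemplate(self):
        return self.st

    def toString(self):
        if self.st is not None:
            return self.st.toString()
        return None

    __str__ = toString            # str(retval) renders the rule's template

retval = RuleReturnScope()
retval.st = FakeTemplate("int x;")
rendered = str(retval)
```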
@genericParser.init() ::= <<
<@super.init()>
self.templateLib = stringtemplate3.StringTemplateGroup(
    '<name>Templates', lexer='angle-bracket'
    )
>>
@genericParser.members() ::= <<
<@super.members()>
def setTemplateLib(self, templateLib):
    self.templateLib = templateLib

def getTemplateLib(self):
    return self.templateLib
>>
/** x+=rule when output=template */
ruleRefAndListLabel(rule,label,elementIndex,args,scope) ::= <<
<ruleRef(...)>
<listLabel(elem=label+".st",...)>
>>
rewriteTemplate(alts) ::= <<
# TEMPLATE REWRITE
<if(backtracking)>
if self._state.backtracking == 0:
    <first(alts):rewriteTemplateAltFirst()>
    <rest(alts):{el<rewriteTemplateAlt()>}>
    <if(rewriteMode)><replaceTextInLine()><endif>
<else>
<first(alts):rewriteTemplateAltFirst()>
<rest(alts):{el<rewriteTemplateAlt()>}>
<if(rewriteMode)><replaceTextInLine()><endif>
<endif>
>>
replaceTextInLine() ::= <<
<if(TREE_PARSER)>
self.input.getTokenStream().replace(
    self.input.getTreeAdaptor().getTokenStartIndex(retval.start),
    self.input.getTreeAdaptor().getTokenStopIndex(retval.start),
    retval.st
    )
<else>
self.input.replace(
    retval.start.getTokenIndex(),
    self.input.LT(-1).getTokenIndex(),
    retval.st
    )
<endif>
>>
rewriteTemplateAltFirst() ::= <<
<if(it.pred)>
if <it.pred>:
    # <it.description>
    retval.st = <it.alt>
<\n>
<else>
# <it.description>
retval.st = <it.alt>
<\n>
<endif>
>>
rewriteTemplateAlt() ::= <<
<if(it.pred)>
if <it.pred>:
    # <it.description>
    retval.st = <it.alt>
<\n>
<else>
se:
    # <it.description>
    retval.st = <it.alt>
<\n>
<endif>
>>
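The bare `se:` in rewriteTemplateAlt is not a typo: rewriteTemplate prefixes every alternative after the first with the literal `el`, so a predicated alt's `if` becomes `elif` and the unpredicated `se:` branch becomes `else:`. That assembly can be sketched as plain string concatenation (names here are illustrative):

```python
# Sketch of how the "el" prefix completes the if/elif/else chain.
def render_alt(pred, desc):
    if pred is not None:
        return "if %s:\n    # %s\n    retval_st = 'alt'" % (pred, desc)
    # Unpredicated alt emits a bare "se:" that the caller's prefix completes.
    return "se:\n    # %s\n    retval_st = 'alt'" % desc

alts = [("x > 0", "alt 1"), ("x < 0", "alt 2"), (None, "default")]
code = render_alt(*alts[0])
for pred, desc in alts[1:]:
    # "el" + "if ..." -> "elif ...", "el" + "se:" -> "else:"
    code += "\nel" + render_alt(pred, desc)
```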
rewriteEmptyTemplate(alts) ::= <<
None
>>
/** Invoke a template with a set of attribute name/value pairs.
* Set the value of the rule's template *after* having set
* the attributes because the rule's template might be used as
* an attribute to build a bigger template; you get a self-embedded
* template.
*/
rewriteExternalTemplate(name,args) ::= <<
self.templateLib.getInstanceOf("<name>"<if(args)>,
    attributes={<args:{a | "<a.name>": <a.value>}; separator=", ">}<endif>)
>>
/** expr is a string expression that says what template to load */
rewriteIndirectTemplate(expr,args) ::= <<
self.templateLib.getInstanceOf(<expr><if(args)>,
    attributes={<args:{a | "<a.name>": <a.value>}; separator=", ">}<endif>)
>>
/** Invoke an inline template with a set of attribute name/value pairs */
rewriteInlineTemplate(args, template) ::= <<
stringtemplate3.StringTemplate(
    "<template>",
    group=self.templateLib<if(args)>,
    attributes={<args:{a | "<a.name>": <a.value>}; separator=", ">}
    <endif>
    )
>>
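stringtemplate3 fills each `<name>` hole in the template text from the attributes dict. Since that library may not be installed everywhere, this stand-in only mimics the attribute substitution that rewriteInlineTemplate relies on (it ignores grouping, escaping, and the angle-bracket lexer's full syntax):

```python
import re

def render_inline(template, **attributes):
    # Replace each <name> hole with the matching attribute value,
    # mirroring rewriteInlineTemplate's attributes dict.
    return re.sub(r"<(\w+)>",
                  lambda m: str(attributes.get(m.group(1), "")),
                  template)

st = render_inline("<type> <name>;", type="int", name="x")
```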
/** plain -> {foo} action */
rewriteAction(action) ::= <<
<action>
>>
/** An action has %st.attrName=expr; or %{st}.attrName=expr; */
actionSetAttribute(st,attrName,expr) ::= <<
(<st>)["<attrName>"] = <expr>
>>
/** Translate %{stringExpr} */
actionStringConstructor(stringExpr) ::= <<
stringtemplate3.StringTemplate(<stringExpr>, group=self.templateLib)
>>
