`lrpar` provides a Yacc-compatible parser (where grammars can be generated at compile-time or run-time). It can take in traditional `.y` files and convert them into an idiomatic Rust parser.
If you’re new to `lrpar`, please read the “quick start guide”. The “grmtools book” and API reference have more detailed information. You can find the appropriate documentation for the version of `lrpar` you are using here:

| Latest release | master |
|---|---|
| Quickstart guide | Quickstart guide |
| grmtools book | grmtools book |
| lrpar API | lrpar API |

Documentation for all past and present releases
§Example
Let’s assume we want to statically generate a parser for a simple calculator language (and let’s also assume we are able to use `lrlex` for the lexer). We need to add a `build.rs` file to our project which statically compiles both the lexer and parser. While we can perform both steps individually, it’s easiest to use `lrlex`, which does both jobs for us in one go. Our `build.rs` file thus looks as follows:
```rust
use cfgrammar::yacc::YaccKind;
use lrlex::CTLexerBuilder;

fn main() {
    CTLexerBuilder::new()
        .lrpar_config(|ctp| {
            ctp.yacckind(YaccKind::Grmtools)
                .grammar_in_src_dir("calc.y")
                .unwrap()
        })
        .lexer_in_src_dir("calc.l")
        .unwrap()
        .build()
        .unwrap();
}
```
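For this `build.rs` to compile, the grmtools crates must be listed in `Cargo.toml`, with `cfgrammar` and `lrlex` available at build time. A minimal sketch (the version numbers are placeholders; use the current, matching versions of all three crates):

```toml
[build-dependencies]
cfgrammar = "0.13"  # placeholder version: keep all grmtools crates in lockstep
lrlex = "0.13"

[dependencies]
cfgrammar = "0.13"
lrlex = "0.13"
lrpar = "0.13"
```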
The `src/calc.l` file it references is as follows (each line of the rules section pairs a regular expression with the name of the token it matches; a `;` in place of a name means the match is discarded):
```
%%
[0-9]+ "INT"
\+ "+"
\* "*"
\( "("
\) ")"
[\t ]+ ;
```
and `src/calc.y` is as follows:
```
%start Expr
%avoid_insert "INT"
%%
Expr -> Result<u64, ()>:
      Expr '+' Term { Ok($1? + $3?) }
    | Term { $1 }
    ;

Term -> Result<u64, ()>:
      Term '*' Factor { Ok($1? * $3?) }
    | Factor { $1 }
    ;

Factor -> Result<u64, ()>:
      '(' Expr ')' { $2 }
    | 'INT'
      {
          let v = $1.map_err(|_| ())?;
          parse_int($lexer.span_str(v.span()))
      }
    ;
%%
// Any functions here are in scope for all the grammar actions above.

fn parse_int(s: &str) -> Result<u64, ()> {
    match s.parse::<u64>() {
        Ok(val) => Ok(val),
        Err(_) => {
            eprintln!("{} cannot be represented as a u64", s);
            Err(())
        }
    }
}
```
Because we specified that our Yacc file is in `Grmtools` format, each rule has a separate Rust type to which all of its production actions conform (in this case, all the rules have the same type, but that’s not a requirement).
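For example, nothing stops different rules producing different types. This hypothetical fragment (the `Count` and `Name` rules and the `ID` token are made up for illustration, and are not part of the calculator grammar above) has one rule produce a `u64` and another a `String`:

```
Count -> u64:
    'INT'
    {
        // Recover the lexeme whether or not error recovery inserted it
        // ($1 is a Result whose Err variant marks an inserted lexeme),
        // then parse its text, falling back to 0.
        let lx = $1.unwrap_or_else(|e| e);
        $lexer.span_str(lx.span()).parse().unwrap_or(0)
    }
    ;

Name -> String:
    'ID' { $lexer.span_str($1.unwrap_or_else(|e| e).span()).to_owned() }
    ;
```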
A simple `src/main.rs` is as follows:
```rust
use std::io::{self, BufRead, Write};

use lrlex::lrlex_mod;
use lrpar::lrpar_mod;

// Using `lrlex_mod!` brings the lexer for `calc.l` into scope. By default the module name
// will be `calc_l` (i.e. the file name, minus any extensions, with a suffix of `_l`).
lrlex_mod!("calc.l");
// Using `lrpar_mod!` brings the parser for `calc.y` into scope. By default the module name
// will be `calc_y` (i.e. the file name, minus any extensions, with a suffix of `_y`).
lrpar_mod!("calc.y");

fn main() {
    // Get the `LexerDef` for the `calc` language.
    let lexerdef = calc_l::lexerdef();
    let stdin = io::stdin();
    loop {
        print!(">>> ");
        io::stdout().flush().ok();
        match stdin.lock().lines().next() {
            Some(Ok(ref l)) => {
                if l.trim().is_empty() {
                    continue;
                }
                // Now we create a lexer with the `lexer` method with which we can lex an input.
                let lexer = lexerdef.lexer(l);
                // Pass the lexer to the parser and lex and parse the input.
                let (res, errs) = calc_y::parse(&lexer);
                for e in errs {
                    println!("{}", e.pp(&lexer, &calc_y::token_epp));
                }
                match res {
                    Some(Ok(r)) => println!("Result: {}", r),
                    _ => eprintln!("Unable to evaluate expression.")
                }
            }
            _ => break
        }
    }
}
```
We can now `cargo run` our project and evaluate simple expressions:
```
>>> 2 + 3
Result: 5
>>> 2 + 3 * 4
Result: 14
>>> (2 + 3) * 4
Result: 20
```
`lrpar` also comes with advanced error recovery built-in:
```
>>> 2 + + 3
Parsing error at line 1 column 5. Repair sequences found:
   1: Delete +
   2: Insert INT
Result: 5
>>> 2 + 3 3
Parsing error at line 1 column 7. Repair sequences found:
   1: Insert *
   2: Insert +
   3: Delete 3
Result: 11
>>> 2 + 3 4 5
Parsing error at line 1 column 7. Repair sequences found:
   1: Insert *, Delete 4
   2: Insert +, Delete 4
   3: Delete 4, Delete 5
   4: Insert +, Shift 4, Delete 5
   5: Insert +, Shift 4, Insert +
   6: Insert *, Shift 4, Delete 5
   7: Insert *, Shift 4, Insert *
   8: Insert *, Shift 4, Insert +
   9: Insert +, Shift 4, Insert *
Result: 17
```
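Notice that error recovery doesn’t just report repairs: it applies one of them and carries on, which is why a `Result:` is still printed after each error above (e.g. `2 + + 3` was repaired to `2 + 3`). If your application should only act on inputs that parsed cleanly, one option (sketched here reusing the `lexerdef` and `calc_y` module from the example above) is to check that no errors were reported before trusting the value:

```rust
// Sketch: reject any input which produced lexing or parsing errors, even
// though error recovery may have computed a plausible value for it.
let lexer = lexerdef.lexer("2 + + 3");
let (res, errs) = calc_y::parse(&lexer);
if errs.is_empty() {
    if let Some(Ok(r)) = res {
        println!("Result: {}", r);
    }
} else {
    eprintln!("Input rejected: {} error(s) found.", errs.len());
}
```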
Modules§
- A module for lifting restrictions on visibility by enabling unstable features.
Macros§
- A convenience macro for including statically compiled `.y` files. A file `src/a/b/c.y` processed by `CTParserBuilder::grammar_in_src_dir` can then be used in a crate with `lrpar_mod!("a/b/c.y")`.
Structs§
- An interface to the result of `CTParserBuilder::build()`.
- A `CTParserBuilder` allows one to specify the criteria for building a statically generated parser.
- Records a single parse error.
- A run-time parser builder.
- A `Span` records what portion of the user’s input something (e.g. a lexeme or production) references (i.e. the `Span` doesn’t hold a reference / copy of the actual input).
Enums§
- A lexing or parsing error. Although the two are quite distinct in terms of what can be reported to users, both can (at least conceptually) occur at any point of the intertwined lexing/parsing process.
- A generic parse tree.
- After a parse error is encountered, the parser attempts to find a way of recovering. Each entry in the sequence of repairs is represented by a `ParseRepair`.
- What recovery algorithm should be used when a syntax error is encountered?
- Specifies the Rust Edition that will be emitted during code generation.
- Specify the visibility of the module generated by `CTBuilder`.
Traits§
- A lexing error.
- A lexeme represents a segment of the user’s input that conforms to a known type: this trait captures the common behaviour of all lexeme structs.
- The base trait which all lexers that want to interact with `lrpar` must implement.
- A `NonStreamingLexer` is one that takes input in one go, and is then able to hand out substrings to that input and calculate line and column numbers from a `Span`.
Functions§
- The action which implements `cfgrammar::yacc::YaccOriginalActionKind::GenericParseTree`. Usually you should just use the action kind directly, but you can also call this from within a custom action to return a generic parse tree with custom behavior.
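As a sketch of the build-time side of this, the `build.rs` from the example above could select the generic parse tree action kind instead of `YaccKind::Grmtools`; a grammar compiled this way omits per-rule return types and action code:

```rust
use cfgrammar::yacc::{YaccKind, YaccOriginalActionKind};
use lrlex::CTLexerBuilder;

fn main() {
    CTLexerBuilder::new()
        .lrpar_config(|ctp| {
            // The generated `parse` function now returns a generic parse tree
            // rather than running user-supplied action code.
            ctp.yacckind(YaccKind::Original(YaccOriginalActionKind::GenericParseTree))
                .grammar_in_src_dir("calc.y")
                .unwrap()
        })
        .lexer_in_src_dir("calc.l")
        .unwrap()
        .build()
        .unwrap();
}
```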