Trait lrlex::LexerDef

pub trait LexerDef<LexerTypesT: LexerTypes>
where usize: AsPrimitive<LexerTypesT::StorageT>,
{
    // Required methods
    fn from_str(s: &str) -> LexBuildResult<Self>
        where Self: Sized;
    fn get_rule(&self, idx: usize) -> Option<&Rule<LexerTypesT::StorageT>>;
    fn get_rule_by_id(&self, tok_id: LexerTypesT::StorageT) -> &Rule<LexerTypesT::StorageT>;
    fn get_rule_by_name(&self, n: &str) -> Option<&Rule<LexerTypesT::StorageT>>;
    fn set_rule_ids<'a>(
        &'a mut self,
        rule_ids_map: &HashMap<&'a str, LexerTypesT::StorageT>,
    ) -> (Option<HashSet<&'a str>>, Option<HashSet<&'a str>>);
    fn set_rule_ids_spanned<'a>(
        &'a mut self,
        rule_ids_map: &HashMap<&'a str, LexerTypesT::StorageT>,
    ) -> (Option<HashSet<&'a str>>, Option<HashSet<(&'a str, Span)>>);
    fn iter_rules(&self) -> Iter<'_, Rule<LexerTypesT::StorageT>>;
    fn iter_start_states(&self) -> Iter<'_, StartState>;
}

Methods which all lexer definitions must implement.
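
For example, a lexer definition can be built at run-time from a Lex-style specification and then queried through this trait. A minimal sketch, assuming the LRNonStreamingLexerDef implementor listed below and lrlex's DefaultLexerTypes; the specification and token names are illustrative:

use lrlex::{DefaultLexerTypes, LRNonStreamingLexerDef, LexerDef};

// A small Lex-style specification; the token names are illustrative.
let lex_src = r#"%%
[0-9]+ "INT"
\+ "PLUS"
[ \t]+ ;
"#;

// LRNonStreamingLexerDef implements LexerDef (see "Implementors" below), so
// the trait's methods can be used once the definition has been built.
let lexerdef = LRNonStreamingLexerDef::<DefaultLexerTypes<u32>>::from_str(lex_src)
    .expect("the specification above should build");
assert!(lexerdef.get_rule_by_name("INT").is_some());
assert!(lexerdef.iter_rules().count() >= 1);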

Required Methods

fn from_str(s: &str) -> LexBuildResult<Self>
where Self: Sized,

Instantiate a lexer from a string (e.g. representing a .l file).
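
A sketch of handling a failed build, assuming that the error side of LexBuildResult is an iterable collection of printable build errors and that a specification without the %% separator is rejected:

use lrlex::{DefaultLexerTypes, LRNonStreamingLexerDef, LexerDef};

// The `%%` separator is deliberately missing, so this build is expected to
// fail and return the collected errors.
let bad_src = r#"[0-9]+ "INT""#;
match LRNonStreamingLexerDef::<DefaultLexerTypes<u32>>::from_str(bad_src) {
    Ok(_) => println!("lexer definition built"),
    Err(errs) => {
        for e in errs {
            eprintln!("lex build error: {e}");
        }
    }
}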

fn get_rule(&self, idx: usize) -> Option<&Rule<LexerTypesT::StorageT>>

Get the Rule at index idx.
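
A sketch, assuming rule indices are contiguous from 0 (as the slice iterator returned by iter_rules suggests):

use lrlex::{DefaultLexerTypes, LRNonStreamingLexerDef, LexerDef};

fn count_rules_by_index(lexerdef: &LRNonStreamingLexerDef<DefaultLexerTypes<u32>>) -> usize {
    // Walk indices upwards until get_rule reports that no rule exists.
    let mut idx = 0;
    while lexerdef.get_rule(idx).is_some() {
        idx += 1;
    }
    idx
}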

fn get_rule_by_id(&self, tok_id: LexerTypesT::StorageT) -> &Rule<LexerTypesT::StorageT>

Get the Rule instance associated with a particular lexeme ID. Panics if no such rule exists.
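
Because unknown IDs cause a panic, this method is best reserved for IDs known to be valid, e.g. lexeme IDs produced by a lexer built from the same definition. A sketch, assuming Rule can be imported from the crate root as this page's signatures suggest:

use lrlex::{DefaultLexerTypes, LRNonStreamingLexerDef, LexerDef, Rule};

fn rule_for(
    lexerdef: &LRNonStreamingLexerDef<DefaultLexerTypes<u32>>,
    tok_id: u32,
) -> &Rule<u32> {
    // `tok_id` is assumed to originate from a lexeme produced by a lexer
    // built from this same definition, so a matching rule must exist;
    // an arbitrary ID would make this call panic.
    lexerdef.get_rule_by_id(tok_id)
}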

fn get_rule_by_name(&self, n: &str) -> Option<&Rule<LexerTypesT::StorageT>>

Get the Rule instance associated with a particular name.
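
For instance, the names a parser expects can be checked against the lexer definition up front. A sketch; the helper function and its names are illustrative:

use lrlex::{DefaultLexerTypes, LRNonStreamingLexerDef, LexerDef};

fn undefined_names<'a>(
    lexerdef: &LRNonStreamingLexerDef<DefaultLexerTypes<u32>>,
    wanted: &[&'a str],
) -> Vec<&'a str> {
    // Keep only the names for which the lexer defines no rule.
    wanted
        .iter()
        .copied()
        .filter(|n| lexerdef.get_rule_by_name(n).is_none())
        .collect()
}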

fn set_rule_ids<'a>(
    &'a mut self,
    rule_ids_map: &HashMap<&'a str, LexerTypesT::StorageT>,
) -> (Option<HashSet<&'a str>>, Option<HashSet<&'a str>>)

Set the id attribute on rules to the corresponding value in rule_ids_map. This is typically used to synchronise a parser’s notion of lexeme IDs with the lexer’s. While doing this, it keeps track of which lexemes:

  1. are defined in the lexer but not referenced by the parser;
  2. are referenced by the parser but not defined in the lexer;

and returns them as a tuple (Option<HashSet<&str>>, Option<HashSet<&str>>), in the order (defined_in_lexer_missing_from_parser, referenced_in_parser_missing_from_lexer). Since in most cases both sets are expected to be empty, None is returned to avoid a HashSet allocation.

Lexing and parsing can continue even if one or both sets are non-empty, so it is up to the caller to decide what action, if any, to take. A non-empty set #1 is often benign: some lexers deliberately define tokens which are not used (e.g. reserving future keywords). A non-empty set #2 is more likely to be an error, since it means there are parts of the grammar for which nothing the user can input will be parseable.
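
A sketch of synchronising IDs and inspecting the two returned sets; the token names and IDs in rule_ids_map are illustrative and would normally come from the parser’s view of the token set:

use std::collections::HashMap;

use lrlex::{DefaultLexerTypes, LRNonStreamingLexerDef, LexerDef};

fn sync_ids(lexerdef: &mut LRNonStreamingLexerDef<DefaultLexerTypes<u32>>) {
    // Illustrative token name -> ID map.
    let rule_ids_map: HashMap<&str, u32> = [("INT", 0), ("PLUS", 1)].into_iter().collect();

    let (missing_from_parser, missing_from_lexer) = lexerdef.set_rule_ids(&rule_ids_map);

    if let Some(names) = missing_from_parser {
        // Tokens defined by the lexer but never referenced by the parser:
        // often benign (e.g. reserved keywords).
        eprintln!("unused by parser: {names:?}");
    }
    if let Some(names) = missing_from_lexer {
        // Tokens the parser references but the lexer cannot produce:
        // usually a real error.
        eprintln!("not defined in lexer: {names:?}");
    }
}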

fn set_rule_ids_spanned<'a>(
    &'a mut self,
    rule_ids_map: &HashMap<&'a str, LexerTypesT::StorageT>,
) -> (Option<HashSet<&'a str>>, Option<HashSet<(&'a str, Span)>>)

As set_rule_ids, but the second returned set additionally carries a Span for each name.

fn iter_rules(&self) -> Iter<'_, Rule<LexerTypesT::StorageT>>

Returns an iterator over all rules in this AST.
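
A sketch, assuming the iteration order matches the indices accepted by get_rule (both appear to walk the same underlying storage):

use lrlex::{DefaultLexerTypes, LRNonStreamingLexerDef, LexerDef, Rule};

fn indexed_rules(
    lexerdef: &LRNonStreamingLexerDef<DefaultLexerTypes<u32>>,
) -> Vec<(usize, &Rule<u32>)> {
    // enumerate() pairs each rule with its position in the iteration.
    lexerdef.iter_rules().enumerate().collect()
}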

fn iter_start_states(&self) -> Iter<'_, StartState>

Returns an iterator over all start states in this AST.
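
A sketch; it assumes the implicit INITIAL start state is included in the iteration alongside any states declared with %s or %x:

use lrlex::{DefaultLexerTypes, LRNonStreamingLexerDef, LexerDef};

fn num_start_states(lexerdef: &LRNonStreamingLexerDef<DefaultLexerTypes<u32>>) -> usize {
    // Counts the implicit INITIAL state plus any declared start states.
    lexerdef.iter_start_states().count()
}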

Implementors

impl<LexerTypesT: LexerTypes> LexerDef<LexerTypesT> for LRNonStreamingLexerDef<LexerTypesT>
where
    usize: AsPrimitive<LexerTypesT::StorageT>,
    LexerTypesT::StorageT: TryFrom<usize>,