Lexical Analyzer v0.1.0
 
lexer_tokenize.h File Reference

Tokenization core implementation.

#include "lexer.h"


Functions

size_t tokcnt (const char *const line)
 Counts the number of tokens in a given string (or file content).
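
A minimal usage sketch, assuming this header is included as "lexer_tokenize.h"; the input string is illustrative, not taken from the source:

    #include <stdio.h>
    #include "lexer_tokenize.h"  /* assumed include path for this header */

    int main(void)
    {
        const char *line = "int x = 42;";  /* illustrative input */
        size_t n = tokcnt(line);           /* count before allocating storage */
        printf("tokens: %zu\n", n);
        return 0;
    }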
 
void toknz_segtoset (tokset_t *const set, const size_t token_index, const char *const line, const size_t start, const size_t end, const size_t line_no, const tokcat_e category, const size_t column)
 Tokenizes a segment of a line and stores the resulting token in the token set.
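
A hedged call sketch showing the parameter layout. TOKCAT_IDENTIFIER is a hypothetical tokcat_e enumerator, and the half-open [start, end) convention is an assumption; the real names and semantics are defined in lexer.h:

    #include "lexer_tokenize.h"

    /* Store one segment of `line` as token 0 of `set`. Assumes `set`
     * was already sized (e.g. via tokcnt) and that [start, end) is
     * half-open; both are assumptions, not facts from this header. */
    void store_one(tokset_t *set, const char *line)
    {
        toknz_segtoset(set,
                       0,                  /* token_index: slot in the set  */
                       line,               /* source line being scanned     */
                       6, 11,              /* start, end of the segment     */
                       1,                  /* line_no: 1-based line number  */
                       TOKCAT_IDENTIFIER,  /* hypothetical category name    */
                       7);                 /* column where the token begins */
    }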
 
tokset_t toknz (const char *const line)
 Tokenizes a line (or multiple lines of code) into a set of tokens.
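
A minimal call sketch; since the brief says toknz handles multiple lines, a string with embedded newlines is shown. Whether and how the returned tokset_t must be released is governed by lexer.h and left open here:

    #include "lexer_tokenize.h"

    void demo(void)
    {
        /* One string holding two source lines */
        tokset_t set = toknz("int x = 42;\nreturn x;\n");
        (void)set;  /* consume via whatever accessors lexer.h provides */
    }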
 

Detailed Description

Tokenization core implementation.

Handles the conversion of source code into token streams:

  • Counting tokens in source strings
  • Segmenting code into lexical units
  • Full tokenization pipeline (combined sketch below)
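
Putting the stages together, a hedged end-to-end sketch. That toknz() internally drives tokcnt() and toknz_segtoset() is suggested by the briefs above but not confirmed by this page:

    #include <stdio.h>
    #include "lexer_tokenize.h"

    int main(void)
    {
        const char *src = "x = x + 1;";

        /* Stage 1: count lexical units up front (e.g. to size the set). */
        size_t n = tokcnt(src);
        printf("expecting %zu tokens\n", n);

        /* Stages 2 and 3: segmentation and storage; presumably toknz()
         * calls toknz_segtoset() once per recognized segment. */
        tokset_t set = toknz(src);
        (void)set;  /* read tokens back per the definitions in lexer.h */
        return 0;
    }
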
See also
    lexer.h for token type definitions
    Token Validation for pattern checking rules