#ifndef LEXER_TOKENIZE_H
#define LEXER_TOKENIZE_H

size_t tokcnt(const char *const line);

void toknz_segtoset(tokset_t *const set, const size_t token_index,
                    const char *const line, const size_t start,
                    const size_t end, const size_t line_no,
                    const tokcat_e category, const size_t column);
tokcat_e
Enumeration of token categories used during the pre-processing phase.
Definition lexer.h:48
tokset_t * toknz(const char *const line)
Tokenizes a line (or multiple lines of code) into a set of tokens.
void toknz_segtoset(tokset_t *const set, const size_t token_index, const char *const line, const size_t start, const size_t end, const size_t line_no, const tokcat_e category, const size_t column)
Tokenizes a segment of a line and stores the resulting token in the token set.
size_t tokcnt(const char *const line)
Counts the number of tokens in a given string (or file content).
Lexical analyzer components for token processing.
tokset_t
Container for multiple tokens.
Definition lexer.h:325