Lexical Analyzer v0.1.0
 
lexer_tokenize.h
#ifndef LEXER_TOKENIZE_H
#define LEXER_TOKENIZE_H

#include "lexer.h"

size_t tokcnt(const char *const line);

void toknz_segtoset(tokset_t *const set,
                    const size_t token_index,
                    const char *const line,
                    const size_t start,
                    const size_t end,
                    const size_t line_no,
                    const tokcat_e category,
                    const size_t column);

tokset_t *toknz(const char *const line);

// End of Tokenization group

#endif
tokcat_e
    Token category enumeration for categorizing different token types in the pre-processing phase.
    Definition: lexer.h:48

tokset_t
    Container for multiple tokens.
    Definition: lexer.h:325

size_t tokcnt(const char *const line)
    Counts the number of tokens in a given string (or file content).

void toknz_segtoset(tokset_t *const set, const size_t token_index, const char *const line, const size_t start, const size_t end, const size_t line_no, const tokcat_e category, const size_t column)
    Tokenizes a segment of a line and stores the resulting token in the token set.

tokset_t *toknz(const char *const line)
    Tokenizes a line (or multiple lines of code) into a set of tokens.

lexer.h
    Lexical analyzer components for token processing.