Remove Whitespace Tokens from Parser #2077
Conversation
pub enum Whitespace {
    Space,
    Newline,
    Tab,
    SingleLineComment { comment: String, prefix: String },
    MultiLineComment(String),
}
While it would make sense for the parser not to have to deal with whitespace, including comments, it will be useful to preserve comment tokens. This is particularly useful when providing IDE completions, where you don't want anything to be suggested inside a comment.
Hi @Viicos, I have nothing against comments - in fact, a PhD student in my team is working on the #2069 PR for handling them in a more structured manner in the AST. They just should not exist in the form of whitespace. Could you kindly expand on your IDE completions note? I am not sure I understood it.
They just should not exist in the form of whitespace.
Agree, maybe they could be made their own token.
Could you kindly expand on your IDE completions note? I am not sure I understood it.
I'm currently experimenting with the tokenizer and parser to use it in a SQL query input we have for our project at work. From a (line, col) position, I'm writing some logic to provide useful completions at this position (by completions, I mean this).
If you are currently writing a comment, the token immediately before the position is going to be a comment token, and in this case I can short-circuit and return an empty list of completions.
As an example, this is how the Ruff project (they are working on a Python language server) is doing it.
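For illustration, a minimal sketch of that short-circuit, assuming a hypothetical Token::Comment variant and that the index of the token under the cursor has already been resolved (neither exists in sqlparser as-is):

```rust
// Hypothetical token kind for the sake of the example; sqlparser does not
// currently expose a dedicated comment token.
enum Token {
    Word(String),
    Comment(String),
}

// Return completion candidates for the cursor position, short-circuiting
// when the cursor sits inside a comment token.
fn completions(tokens: &[Token], cursor_token_index: usize) -> Vec<String> {
    if let Some(Token::Comment(_)) = tokens.get(cursor_token_index) {
        // Nothing should be suggested while the user is typing a comment.
        return Vec::new();
    }
    // ...otherwise derive real suggestions from the surrounding tokens.
    vec!["SELECT".to_string(), "FROM".to_string()]
}
```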
Actually having a comment token kind would defeat the purpose of this PR, because the logic in the parser to skip those comment tokens would be the same.
The Ruff parser solves this by having a TokenSource struct, acting as a bridge between the lexer/tokenizer and parser. It has a couple methods to bump the tokens, ignoring the trivia tokens (in our case, that would only be the comment tokens). Maybe we could take inspiration from this pattern?
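Roughly, such a bridge could look like the sketch below (names are illustrative and loosely modeled on Ruff's TokenSource, not existing sqlparser APIs): the parser only ever sees non-trivia tokens, while the full stream, comments included, stays available for tooling.

```rust
// Illustrative token kinds; Comment is the only trivia kind in this sketch.
#[derive(Clone)]
enum Token {
    Word(String),
    Comma,
    Comment(String),
    EOF,
}

// Bridge between tokenizer output and the parser: bump() skips trivia, but
// the underlying vector still holds every token, comments included.
struct TokenSource {
    tokens: Vec<Token>,
    index: usize,
}

impl TokenSource {
    fn new(tokens: Vec<Token>) -> Self {
        Self { tokens, index: 0 }
    }

    // Advance past any trivia and return the next meaningful token.
    fn bump(&mut self) -> Token {
        loop {
            let tok = self.tokens.get(self.index).cloned().unwrap_or(Token::EOF);
            self.index += 1;
            if !matches!(tok, Token::Comment(_)) {
                return tok;
            }
        }
    }
}
```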
Somewhat - the idea in the aforementioned PR was to have the concept of leading and interstitial comments, with the former getting into the AST and the latter being dropped as whitespace.
I see; I still think it would be important for any kind of comment to be preserved as tokens (my use case is related to completions, but it can also be useful if you want to implement a formatting mechanism that preserves comments, or if a comment can be used to e.g. specify a linter directive -- presumably this would be relevant for interstitial comments as well).
I commented some thoughts on this PR ;)
This PR implements a significant architectural refactoring by moving whitespace filtering from the parser to the tokenizer. Instead of emitting whitespace tokens (spaces, tabs, newlines, comments) and filtering them throughout the parser logic, the tokenizer now consumes whitespace during tokenization and never emits these tokens.
While some duplicated logic still remains in the parser (to be addressed in future PRs), this change eliminates a substantial amount of looping overhead. This PR also lays the groundwork for a cleaner streaming version, in which tokens are parsed into statements in a single pass, with no parser memory and only local context passed between parser function calls.
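As a simplified sketch of the idea (not the actual sqlparser code), the tokenizer now discards whitespace in place and only ever pushes meaningful tokens:

```rust
#[derive(Debug, PartialEq)]
enum Token {
    Word(String),
    Comma,
}

// Toy tokenizer that consumes whitespace inline instead of emitting
// Whitespace tokens for the parser to filter out later.
fn tokenize(input: &str) -> Vec<Token> {
    let mut tokens = Vec::new();
    let mut chars = input.chars().peekable();
    while let Some(&c) = chars.peek() {
        if c.is_whitespace() {
            // Consume and discard: no token is produced for whitespace.
            chars.next();
        } else if c == ',' {
            chars.next();
            tokens.push(Token::Comma);
        } else {
            let mut word = String::new();
            while let Some(&c) = chars.peek() {
                if c.is_whitespace() || c == ',' {
                    break;
                }
                word.push(c);
                chars.next();
            }
            tokens.push(Token::Word(word));
        }
    }
    tokens
}
```

With this shape, tokenize("SELECT a ,\n b") yields only Word and Comma tokens; the parser never has to ask whether the next token is whitespace.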
Fixes #2076
Motivation
As discussed in #2076, whitespace tokens were being filtered at numerous points throughout the parser. This approach had several drawbacks:
The parser had extensive whitespace-handling logic scattered throughout:
Functions with whitespace-skipping loops:
- peek_tokens_with_location: loops to skip whitespace
- peek_tokens_ref: loops to skip whitespace
- peek_nth_token_ref: loops to skip whitespace
- advance_token: loops to skip whitespace
- prev_token: loops backward to skip whitespace

Special variant functions that are now obsolete:

- peek_token_no_skip: removed entirely (no longer needed)
- peek_nth_token_no_skip: removed entirely (no longer needed)
- next_token_no_skip: removed entirely (no longer needed)

Since SQL is not a whitespace-sensitive language (unlike Python), it should be safe to remove whitespace tokens entirely after tokenization; the sketch below shows the resulting simplification.
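The shape of the change in the parser is roughly the following (illustrative only, not the actual sqlparser code): every peek used to loop past whitespace, and now reduces to a plain index lookup.

```rust
#[derive(Clone)]
enum Token {
    Whitespace,
    Word(String),
    EOF,
}

struct Parser {
    tokens: Vec<Token>,
    index: usize,
}

impl Parser {
    // Before: every peek had to loop past Whitespace tokens.
    fn peek_token_old(&self) -> Token {
        let mut i = self.index;
        loop {
            match self.tokens.get(i) {
                Some(Token::Whitespace) => i += 1,
                Some(tok) => return tok.clone(),
                None => return Token::EOF,
            }
        }
    }

    // After: whitespace never reaches the parser, so a peek is a single lookup.
    fn peek_token_new(&self) -> Token {
        self.tokens.get(self.index).cloned().unwrap_or(Token::EOF)
    }
}
```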
Handling Edge Cases
While SQL is generally not whitespace-sensitive, there are specific edge cases that require careful consideration:
1. PostgreSQL COPY FROM STDIN
The COPY FROM STDIN statement requires preserving the actual data content, which may include meaningful whitespace and newlines. The data section is treated as raw input that should be parsed according to the specified format (tab-delimited, CSV, etc.).

Solution: the tokenizer now handles this by consuming the data as a single token. The parser then actually parses the body of the CSV-like string, which was not done correctly before this refactoring. I have extended the associated tests accordingly.
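A rough sketch of the single-token approach, under the assumption that the raw payload runs up to PostgreSQL's end-of-data marker \. (the helper name is illustrative, not the PR's actual function):

```rust
// Consume the raw COPY FROM STDIN payload up to the end-of-data marker "\."
// and keep it as one opaque chunk, preserving internal whitespace and
// newlines for later format-specific parsing (tab-delimited, CSV, ...).
fn consume_copy_data(input: &str) -> (String, &str) {
    let mut data = String::new();
    let mut rest = input;
    while let Some(newline) = rest.find('\n') {
        let (line, tail) = rest.split_at(newline + 1);
        if line.trim_end() == r"\." {
            // The terminator line itself is not part of the data.
            return (data, tail);
        }
        data.push_str(line);
        rest = tail;
    }
    (data, "")
}
```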
2. Hyphenated and path identifiers
The tokenizer now includes enhanced logic for hyphenated identifier parsing with proper validation:
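The exact rules live in the tokenizer; as a heavily simplified sketch of what such validation can look like (illustrative names, not the PR's actual code), a hyphen is only folded into an identifier when the dialect allows it and it sits between identifier characters, so that an expression like a - b keeps tokenizing as subtraction:

```rust
// Illustrative check: treat a hyphen as part of an identifier only when the
// dialect supports hyphenated identifiers (e.g. BigQuery project ids) and it
// is surrounded by identifier characters on both sides.
fn hyphen_continues_identifier(
    dialect_allows_hyphenated_identifiers: bool,
    prev: Option<char>,
    next: Option<char>,
) -> bool {
    let is_ident_char = |c: char| c.is_alphanumeric() || c == '_';
    dialect_allows_hyphenated_identifiers
        && prev.map_or(false, is_ident_char)
        && next.map_or(false, is_ident_char)
}
```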