Finish the migration to LLMs; removing NLTK, etc. #153
base: migrate-from-spaCy-and-nltk
Conversation
Harder than anticipated, for a few reasons:
* Can't get things in order. I guess that's the point of PDFMiner, but...
* PDFMiner doesn't give you AcroForms at all. It has a completely hardcoded way of getting them, outside the context of the page.
* We can do these two things:
  * Kinda put all of the fields back in the original text (see `replace_original_text`). Doesn't work too well, though: lots of duplicate pieces of text put many of the fields in the same place when they should be in different places.
  * Gather fields with the same adjacent text, and get all parts of that text in the PDF. Not guaranteed to be in order, though.
* For each field, get all of the surrounding context. Is okay! But it consistently gets too much text for GPT-4. Even if we make it smaller, sometimes the surrounding context isn't the full sentence, or it pulls in too much from other fields (too much shared text, confusing the two fields). TBH, the next goal is to try the PDFPageAndFieldInterpreter approach; notes are in there.
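The "surrounding context" idea above can be sketched as a simple character window around each field marker in the extracted page text. This is a minimal illustration, not the code in this PR; the names `context_for_field` and the `[[...]]` marker convention are hypothetical:

```python
def context_for_field(page_text: str, marker: str, window: int = 300) -> str:
    """Return up to `window` characters on each side of a field marker.

    Naively truncating at the window edge illustrates the problem noted
    above: the slice may cut a sentence in half, or swallow text that
    really belongs to a neighboring field.
    """
    idx = page_text.find(marker)
    if idx == -1:
        return ""
    start = max(0, idx - window)
    end = min(len(page_text), idx + len(marker) + window)
    return page_text[start:end]
```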
…mpletion functions
…nd started promptfoo but still needs work
Going to look at the test failure, which is due to typing problems.
Pull Request Overview
This pull request completes the migration to LLM-based processing by removing NLTK/scipy dependencies and integrating PDF context for enhanced field renaming. It upgrades all LLM calls to use the ChatCompletion API with gpt-5-nano as the default model, moves prompts to standalone .txt files for easier benchmarking, and adds comprehensive integration tests.
- Removes all NLTK, scikit-learn, networkx, and joblib dependencies
- Implements LLM-powered field renaming with full PDF context analysis
- Migrates all functions to use external prompt files and modern OpenAI API
Reviewed Changes
Copilot reviewed 26 out of 32 changed files in this pull request and generated 2 comments.
| File | Description |
| --- | --- |
| setup.py | Removes deprecated ML/NLP dependencies (nltk, scikit-learn, networkx, joblib) |
| pyproject.toml | Removes mypy overrides for removed dependencies |
| promptfoo-real-system.yaml | Adds comprehensive prompt evaluation framework with 10 test PDFs |
| promptfoo-quality.yaml | Adds field renaming quality assessment with snake_case validation |
| formfyxer/tests/test_passive_voice_detection.py | Updates tests to work with new ChatCompletion message format |
| formfyxer/tests/pdf_grouped_dataset.csv | Adds test dataset for PDF field grouping evaluation |
| formfyxer/tests/field_renaming_dataset.csv | Adds comprehensive field renaming test dataset (310 entries) |
| formfyxer/tests/cluster_test.py | Simplifies tests to avoid hardcoded expectations, focuses on validation |
| formfyxer/requirements.txt | Removes obsolete ML dependencies |
| formfyxer/prompts/*.txt | Adds 6 external prompt files for modular LLM interactions |
| formfyxer/pdf_wrangling.py | Adds PDF text extraction with field markers and improved type safety |
| formfyxer/passive_voice_detection.py | Updates to use ChatCompletion API with proper message structure |
| formfyxer/lit_explorer.py | Major refactor: removes ML dependencies, adds LLM-based field processing |
| formfyxer/integration_tests/*.py | Adds 4 comprehensive integration tests for new LLM functionality |
| formfyxer/docx_wrangling.py | Updates to use ChatCompletion API and improves type annotations |
| MANIFEST.in | Removes joblib files from distribution |
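The `promptfoo-quality.yaml` row mentions snake_case validation of renamed fields. One plausible definition of that check is sketched below; the regex and function name are my assumption, and the repo's actual promptfoo assertion may differ:

```python
import re

# Assumed snake_case rule: lowercase words joined by single underscores,
# starting with a letter. Double underscores and leading underscores fail.
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9]*(?:_[a-z0-9]+)*$")


def is_snake_case(name: str) -> bool:
    """Return True if `name` matches the assumed snake_case convention."""
    return bool(SNAKE_CASE.fullmatch(name))
```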
Co-authored-by: Copilot <[email protected]>
LGTM; mostly nits and questions, although I haven't had enough time to verify deeply myself.
```python
    # Fallback to traditional approach
    length = len(field_names)
    last = "null"
    new_names = []
    new_names_conf = []
    for i, field_name in enumerate(field_names):
        new_name, new_confidence = normalize_name(
            jur or "",
            cat or "",
            i,
            i / length,
            last,
            field_name,
            tools_token=tools_token,
        )
        new_names.append(new_name)
        new_names_conf.append(new_confidence)
        last = field_name
    new_names = [
        v + "__" + str(new_names[:i].count(v) + 1) if new_names.count(v) > 1 else v
        for i, v in enumerate(new_names)
    ]
else:
    # Traditional approach when no OpenAI credentials available
    length = len(field_names)
    last = "null"
    new_names = []
    new_names_conf = []
    for i, field_name in enumerate(field_names):
        new_name, new_confidence = normalize_name(
            jur or "",
```
This is a big enough chunk that it shouldn't be duplicated in the `else` branch directly below it.
Co-authored-by: Bryce Willey <[email protected]>
Fix #145
This resolves the last batch of changes mentioned in #145. It's a big PR, but it wasn't easy to break into smaller ones.