AI Recommender of Recommenders (AI Reco of Recos) #2235
samelhousseini wants to merge 1 commit into staging
Conversation
Check out this pull request on ReviewNB: see visual diffs & provide feedback on Jupyter Notebooks. Powered by ReviewNB.
loomlike left a comment
Thank you for your contribution! I've left a few comments—these changes are not mandatory but would be nice to have. Feel free to reply if you have any questions or concerns.
```markdown
Below is an at-a-glance map of how deeply OpenAI's o3 reasoning model can help you with every algorithm in the list.

| Algorithm | Model expertise level\* |
```
How was this table derived?
```python
from rich.console import Console
console = Console()

from tenacity import (
```
redundant import (line 12)
```python
if model == "o4-mini":
    return tiktoken.get_encoding("o200k_base")
else:
    return tiktoken.get_encoding("o200k_base")
```
To handle the else case, I recommend using `encoding_for_model`, which will also raise a `KeyError` for you if the model name is not defined in the lookup table: https://github.com/openai/tiktoken/blob/4560a8896f5fb1d35c6f8fd6eee0399f9a1a27ca/tiktoken/model.py#L80
Also, if you want to keep `else`, I suggest using `elif` for all conditions except the first one to make the logic correct. Alternatively, you can remove `else` entirely, since each `if` statement already returns.
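Untested sketch, assuming the helper only needs to map a model name to an encoding:

```python
import tiktoken

def get_encoding(model: str) -> tiktoken.Encoding:
    # encoding_for_model resolves the name via tiktoken's internal lookup
    # table and raises KeyError for unknown models, so no fallback is needed.
    return tiktoken.encoding_for_model(model)
```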
```python
# print(">>>>>>>>>>>>>>>>> call_llm model_info", model_info)

if (model_info.model_name == "gpt-4o") or ((model_info.model_name == "gpt-45")):
    return call_4(messages, model_info.client, model_info.model, temperature)
```
You may simplify this conditional logic by distinguishing between reasoning and non-reasoning models instead of listing all the models individually.
I would recommend using the Responses API instead of the Chat API for this type of task though.
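For example, a rough sketch of the routing idea, keeping the existing `call_4`/`call_o1` helpers (the `REASONING_MODELS` set and the default arguments are placeholders, not tested):

```python
# Placeholder set; adjust to whatever reasoning models the repo supports.
REASONING_MODELS = {"o1", "o1-mini", "o3", "o3-mini", "o4-mini"}

def call_llm(messages, model_info, temperature=0.2, reasoning_effort="medium"):
    # Branch on model family rather than enumerating individual model names.
    if model_info.model_name in REASONING_MODELS:
        return call_o1(messages, model_info.client, model_info.model, reasoning_effort)
    return call_4(messages, model_info.client, model_info.model, temperature)
```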
```python
@retry(wait=wait_random_exponential(min=1, max=30), stop=stop_after_attempt(10))
def call_o1(messages, client, model, reasoning_effort="medium"):
    # print(f"\nCalling OpenAI APIs with {len(messages)} messages - Model: {model} - Endpoint: {client._base_url}\n")
    response = client.chat.completions.create(model=model, messages=messages, reasoning_effort=reasoning_effort)
```
Please consider using the Responses API if possible, for both reasoning and non-reasoning models.
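An untested sketch of the equivalent Responses API call (assumes an openai-python version that exposes `client.responses`):

```python
from tenacity import retry, stop_after_attempt, wait_random_exponential

@retry(wait=wait_random_exponential(min=1, max=30), stop=stop_after_attempt(10))
def call_o1(messages, client, model, reasoning_effort="medium"):
    # The Responses API takes the same role/content message list as `input`
    # and accepts reasoning effort via the nested `reasoning` parameter.
    response = client.responses.create(
        model=model,
        input=messages,
        reasoning={"effort": reasoning_effort},
    )
    return response.output_text
```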
```python
openai_embedding_model_info = {
    "KEY": os.getenv('OPENAI_API_KEY'),
    "MODEL": os.getenv('OPENAI_MODEL_EMBEDDING'),
    "DIMS": 3072 if os.getenv('AZURE_OPENAI_MODEL_EMBEDDING') == "text-embedding-3-large" else 1536
```
Which models are covered by the else case here? Can we be certain that all their dimensions are 1536?
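One way to make that explicit is a lookup table (sketch; dimensions per OpenAI's published model specs, variable names made up):

```python
import os

# Output dimensions for the embedding models the code intends to support.
EMBEDDING_DIMS = {
    "text-embedding-3-large": 3072,
    "text-embedding-3-small": 1536,
    "text-embedding-ada-002": 1536,
}

model = os.getenv("OPENAI_MODEL_EMBEDDING")
dims = EMBEDDING_DIMS[model]  # unsupported models fail fast with KeyError
```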
```python
prompt_path = os.path.join(prompts_directory, prompt_name)
if os.path.exists(prompt_path): return prompt_path

return os.path.join(module_directory, 'prompts', prompt_name)
```
Will this filepath always exist? If not, raising `ValueError` or `FileNotFoundError` seems the right choice.
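Something like this untested sketch (`resolve_prompt_path` and its arguments are illustrative names taken from the snippet above):

```python
import os

def resolve_prompt_path(prompts_directory, module_directory, prompt_name):
    # Try the user-supplied directory first, then the packaged default.
    for base in (prompts_directory, os.path.join(module_directory, "prompts")):
        candidate = os.path.join(base, prompt_name)
        if os.path.exists(candidate):
            return candidate
    raise FileNotFoundError(
        f"Prompt '{prompt_name}' not found in '{prompts_directory}' "
        f"or in the packaged 'prompts' directory"
    )
```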
```python
    img.save(new_image_path, 'JPEG')
    return new_image_path
else:
    return None
```
Maybe raise `ValueError`, or at least warn users that the file is not a PNG?
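E.g. an untested sketch with a warning (the function and variable names are made up):

```python
import warnings
from pathlib import Path

def convert_png_to_jpeg(image_path):
    # Warn instead of silently returning None when the input is not a PNG.
    if Path(image_path).suffix.lower() != ".png":
        warnings.warn(f"Expected a PNG file but got '{image_path}'; skipping.")
        return None
    ...  # conversion logic as in the snippet above
```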
```python
from pathlib import Path
```
Could you move this import statement to the top of the file?
```python
import uuid
import hashlib
```
Same here: maybe move all imports to the top of the file.
Description
New AI-powered Recommender of Recommenders
References
Using OpenAI Reasoning Models
Checklist:
PR is opened against the staging branch AND NOT TO the main branch.