
Support ML-Danbooru #6

Closed
picobyte wants to merge 2 commits into picobyte:master from CCRcmcpe:mldb

Conversation

@picobyte (Owner)

See the discussion here.

    ),
    'wd14-convnextv2.v1': WaifuDiffusionInterrogator(
        'WD14 ConvNeXTV2 v1',
        repo_id='SmilingWolf/wd-v1-4-convnextv2-tagger-v2'
@picobyte (Owner, Author), Jul 16, 2023

This is odd: v1 and v2. Was the repository named incorrectly? If so, I'm adding a comment.

    repo_id='SmilingWolf/wd-v1-4-convnextv2-tagger-v2'
    ),
    'wd14-swinv2.v2': WaifuDiffusionInterrogator(
        'WD14 SwinV2 v1',
@picobyte (Owner, Author), Jul 16, 2023

Same question here, but then I'd say it should be:

    'wd14-swinv2.v1': WaifuDiffusionInterrogator(
-----------------^^
        'WD14 SwinV2 v1',
        repo_id='SmilingWolf/wd-v1-4-swinv2-tagger-v2'
    ),

    target_size = (
        (target_size[0] // 4) * 4,
        (target_size[1] // 4) * 4,
    )
@picobyte (Owner, Author)

This is the same as

    target_size = (target_size[0] & ~3, target_size[1] & ~3)

right?
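A quick check, with hypothetical helper names, that the two roundings agree for the non-negative sizes involved here:

```python
# Sketch: flooring to a multiple of 4 via integer division vs. clearing
# the two low bits. Helper names are illustrative, not from the PR.
def floor4_div(n: int) -> int:
    return (n // 4) * 4

def floor4_mask(n: int) -> int:
    return n & ~3

# The two agree for all non-negative integers.
assert all(floor4_div(n) == floor4_mask(n) for n in range(1024))
```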

    'onnxruntime-gpu'
    )

    run_pip(f'install {package}', 'onnxruntime')
@picobyte (Owner, Author), Jul 16, 2023

I can just add onnxruntime to the requirements.txt, right?

    onnxruntime-silicon; sys_platform == 'darwin'
    onnxruntime-gpu; sys_platform != 'darwin'

Or people can adapt this.
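The same platform split could also be done at runtime rather than via environment markers; a sketch, assuming the package names from the comment above (whether onnxruntime-silicon is the right choice on every Mac is an assumption):

```python
import sys

# Sketch: pick the onnxruntime package by platform, mirroring the
# environment markers quoted above. Function name is hypothetical.
def onnxruntime_package(platform: str = sys.platform) -> str:
    if platform == 'darwin':
        return 'onnxruntime-silicon'
    return 'onnxruntime-gpu'
```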

    def __init__(
        self,
        name: str,
        repo_id: str,
@picobyte (Owner, Author), Jul 16, 2023

I have some interrogators without a repo_id, so I've changed this to repo_id=None; that also requires some changes elsewhere.
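A minimal sketch of that change, assuming the class and attribute names from the snippets in this PR (the `has_repo` helper is hypothetical):

```python
from typing import Optional

# Sketch: repo_id becomes optional so interrogators without a
# Hugging Face repo can still be constructed.
class Interrogator:
    def __init__(self, name: str, repo_id: Optional[str] = None) -> None:
        self.name = name
        self.repo_id = repo_id

    def has_repo(self) -> bool:
        return self.repo_id is not None
```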

    unload_model_after_running: bool
    ):
    if interrogator not in utils.interrogators:
        interrogator: Interrogator = next((i for i in utils.interrogators.values() if interrogator_name == i.name), None)
@picobyte (Owner, Author)

This is a bit silly.
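One way to tidy that lookup: resolve by dict key first, then fall back to matching on display name, without re-binding the parameter mid-function. A sketch; `resolve` and the simplified registry stand in for the real `utils.interrogators` code.

```python
# Sketch: look up an interrogator by registry key or by display name.
# Names here are illustrative, not the extension's actual API.
class Interrogator:
    def __init__(self, name: str) -> None:
        self.name = name

def resolve(interrogators: dict, key_or_name: str):
    if key_or_name in interrogators:
        return interrogators[key_or_name]
    # Fall back to matching the human-readable name.
    return next(
        (i for i in interrogators.values() if i.name == key_or_name),
        None,
    )
```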

    ', '.join(processed_tags),
    ratings,
    tags,
    dict(list(tags.items())[:200]),
@picobyte (Owner, Author), Jul 16, 2023

In my branch there is already a solution for this (the slider in Settings -> Tagger), so I can omit this.

    interrogators[path.name] = DeepDanbooruInterrogator(path.name, path)

    return sorted(interrogators.keys())
    interrogators["deepdanbooru"] = DeepDanbooruInterrogator(path.name, path)
@picobyte (Owner, Author), Jul 16, 2023

Path.name seems to work as well; why change it?
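For reference, a `Path`'s `.name` is just the final path component, which is why `path.name` and a hard-coded key can coincide for that model directory; a quick sketch:

```python
from pathlib import Path

# Path.name returns the final component of the path, so for a model
# directory named "deepdanbooru" both spellings yield the same key.
p = Path('models/deepdanbooru')
assert p.name == 'deepdanbooru'
```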

    if self.model is not None:
        self.model = None
        unloaded = True
        gc.collect()
@picobyte (Owner, Author)

What exactly does gr.collect() do?


You may have already noticed: this line is gc.collect() rather than gr.collect(). I guess the PR author's original purpose was to call the garbage collector (gc) to run a full collection. But as we already know, they abandoned this approach due to the underlying library bugs.
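A minimal sketch of the unload pattern under discussion (the class name is hypothetical); whether `gc.collect()` actually frees the backing memory depends on the runtime, which is exactly the library-bug caveat noted above:

```python
import gc

# Sketch: drop the reference to the model and ask the garbage
# collector for a full collection. gc.collect() only reclaims Python
# objects; GPU memory held by a native runtime may not be released.
class ModelHolder:
    def __init__(self) -> None:
        self.model = object()  # stand-in for a loaded model

    def unload(self) -> bool:
        unloaded = False
        if self.model is not None:
            self.model = None
            unloaded = True
            gc.collect()
        return unloaded
```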

@picobyte (Owner, Author)

Ah yes, I noticed and forgot. I also found this, so numba is a possible solution for CUDA users.


The new solution looks cool, please give it a try.

    except ImportError:
        # only one of these packages should be installed at a time in any one environment
        # https://onnxruntime.ai/docs/get-started/with-python.html#install-onnx-runtime
        # TODO: remove old package when the environment changes?
@picobyte (Owner, Author), Jul 16, 2023

In what scenario do you expect the environment to change? Installed on an external drive? I don't think we have to support this.

    repo_id: str,
    model_path='model.onnx',
    tags_path='selected_tags.csv',
    **kwargs
@picobyte (Owner, Author)

The kwargs are now obsolete.

    tags_path = Path(hf_hub_download(
        **self.kwargs, filename=self.tags_path))
    model_path = hf_hub_download(self.repo_id, self.model_path)
    tags_path = hf_hub_download(self.repo_id, self.model_path)
@picobyte (Owner, Author), Jul 16, 2023

 tags_path = hf_hub_download(self.repo_id, self.model_path)
 ^^^^^^^^^--------------------------------------^^^^^^^^^^

This is a bug; I've fixed it.

@picobyte (Owner, Author)

I could not simply merge it anymore, so I manually applied the changes and fixed the issues mentioned here. Thanks for the work; I'll mention you in the changelog and commit message. I've posed a lot of questions, sorry about that. Only the ones where I've added eyes are still open questions to me.
