
Conversation

mandroid6
Contributor

Summary:
This diff adds a new argument `--use-tma` to the `operator.py` file in the `flex_attention` directory of the `tritonbench` repository. This flag lets users enable the Tensor Memory Accelerator (TMA) in kernel options for flex-attention benchmarks.

**Changes:**

* Added a `--use-tma` argument to the `parse_args` function in `operator.py`
* Modified `parse_args` to store the `--use-tma` value on the `args` object

Differential Revision: D74839480
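A minimal sketch of what such an argparse change might look like. This is an assumption for illustration only: the real `parse_args` in tritonbench has a different signature and many more options, and the exact help text here is invented.

```python
import argparse


def parse_args(argv=None):
    # Hypothetical, simplified version of tritonbench's parse_args;
    # only the --use-tma flag described in this diff is shown.
    parser = argparse.ArgumentParser(description="flex_attention benchmark options")
    parser.add_argument(
        "--use-tma",
        action="store_true",  # defaults to False when the flag is absent
        help="Enable the Tensor Memory Accelerator (TMA) in kernel options",
    )
    return parser.parse_args(argv)


args = parse_args(["--use-tma"])
print(args.use_tma)  # True
```

With `action="store_true"` the flag needs no value on the command line: `args.use_tma` is `False` by default and `True` when `--use-tma` is passed, which the benchmark can then forward into its kernel options.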

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D74839480
