1 change: 1 addition & 0 deletions .devcontainer/devcontainer.json
@@ -46,6 +46,7 @@
     "customizations": {
         "vscode": {
             "extensions": [
+                "openai.chatgpt",
                 "GitHub.copilot",
                 "GitHub.copilot-chat",
                 "ms-toolsai.jupyter",
39 changes: 39 additions & 0 deletions .github/workflows/release-please-lock.yaml
@@ -0,0 +1,39 @@
+name: Sync uv.lock for release PRs
+
+on:
+  pull_request:
+    types:
+      - opened
+      - synchronize
+      - ready_for_review
+      - reopened
+
+jobs:
+  update-uv-lock:
+    if: startsWith(github.head_ref, 'release-please--')
+    runs-on: ubuntu-latest
+    permissions:
+      contents: write
+    steps:
+      - name: Checkout release PR branch
+        uses: actions/checkout@v4
+        with:
+          ref: ${{ github.head_ref }}
+      - name: Install uv
+        uses: astral-sh/setup-uv@v5
+        with:
+          enable-cache: true
+      - name: Regenerate uv.lock
+        run: uv lock
+      - name: Commit updated lockfile
+        if: ${{ !cancelled() }}
+        run: |
+          if git diff --quiet -- uv.lock; then
+            echo "uv.lock already up to date."
+            exit 0
+          fi
+          git config user.name "github-actions[bot]"
+          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
+          git add uv.lock
+          git commit -m "chore: sync uv.lock after version bump"
+          git push
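
To replay this gate locally before pushing to a release branch, a minimal Python sketch (assuming `uv` and `git` are on PATH; this helper is not part of the PR) might look like:

```py
import subprocess

# Regenerate the lockfile, then apply the same check the workflow's
# commit step uses: `git diff --quiet -- uv.lock` exits non-zero on drift.
subprocess.run(["uv", "lock"], check=True)
drifted = subprocess.run(["git", "diff", "--quiet", "--", "uv.lock"]).returncode != 0
if drifted:
    print("uv.lock changed; the workflow would commit and push it.")
else:
    print("uv.lock already up to date.")
```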
20 changes: 20 additions & 0 deletions .github/workflows/release-please.yaml
@@ -0,0 +1,20 @@
+name: Release Please
+
+on:
+  push:
+    branches:
+      - main
+  workflow_dispatch:
+
+jobs:
+  release-please:
+    runs-on: ubuntu-latest
+    permissions:
+      contents: write
+      pull-requests: write
+      issues: write
+    steps:
+      - uses: googleapis/release-please-action@v4
+        with:
+          config-file: release-please-config.json
+          manifest-file: .release-please-manifest.json
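
The manifest this action maintains (see `.release-please-manifest.json` below) should stay in step with the package version. A quick local consistency check, under the assumption that the canonical version lives at `[project].version` in `pyproject.toml` (that file is not shown in this diff):

```py
import json
import tomllib  # stdlib in Python 3.11+

# Compare the release-please manifest against pyproject.toml.
# Assumption: the version is declared under [project].version.
with open(".release-please-manifest.json", "rb") as fp:
    manifest_version = json.load(fp)["."]
with open("pyproject.toml", "rb") as fp:
    project_version = tomllib.load(fp)["project"]["version"]

assert manifest_version == project_version, (
    f"manifest {manifest_version} != pyproject {project_version}"
)
```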
8 changes: 5 additions & 3 deletions .github/workflows/release.yaml
@@ -1,9 +1,11 @@
-name: Release to PyPi
+name: Release to PyPI
 
 on:
   release:
-    types:
-      - created
+    types: [published]
+  push:
+    tags:
+      - "v*"
   workflow_dispatch:
 
 jobs:
3 changes: 3 additions & 0 deletions .release-please-manifest.json
@@ -0,0 +1,3 @@
+{
+  ".": "1.6.0"
+}
18 changes: 9 additions & 9 deletions README.md
@@ -100,15 +100,15 @@ __HeartKit__ exposes several open-source datasets for training each of the Heart
 
 __HeartKit__ provides a __model factory__ that allows you to easily create and train customized models. The model factory includes a number of modern networks well suited for efficient, real-time edge applications. Each model architecture exposes a number of high-level parameters that can be used to customize the network for a given application. These parameters can be set as part of the configuration accessible via the CLI and Python package.
 
-- **[TCN](https://ambiqai.github.io/neuralspot-edge/models/tcn)**: A CNN leveraging dilated convolutions (key=`tcn`)
-- **[U-Net](https://ambiqai.github.io/neuralspot-edge/models/unet)**: A CNN with encoder-decoder architecture for segmentation tasks (key=`unet`)
-- **[U-NeXt](https://ambiqai.github.io/neuralspot-edge/models/unext)**: A U-Net variant leveraging MBConv blocks (key=`unext`)
-- **[EfficientNetV2](https://ambiqai.github.io/neuralspot-edge/models/efficientnet)**: A CNN leveraging MBConv blocks (key=`efficientnet`)
-- **[MobileOne](https://ambiqai.github.io/neuralspot-edge/models/mobileone)**: A CNN aimed at sub-1ms inference (key=`mobileone`)
-- **[ResNet](https://ambiqai.github.io/neuralspot-edge/models/resnet)**: A popular CNN often used for vision tasks (key=`resnet`)
-- **[Conformer](https://ambiqai.github.io/neuralspot-edge/models/conformer)**: A transformer composed of both convolutional and self-attention blocks (key=`conformer`)
-- **[MetaFormer](https://ambiqai.github.io/neuralspot-edge/models/metaformer)**: A transformer composed of both spatial mixing and channel mixing blocks (key=`metaformer`)
-- **[TSMixer](https://ambiqai.github.io/neuralspot-edge/models/tsmixer)**: An All-MLP Architecture for Time Series Classification (key=`tsmixer`)
+- **[TCN](https://ambiqai.github.io/helia-edge/api/helia_edge/models/tcn)**: A CNN leveraging dilated convolutions (key=`tcn`)
+- **[U-Net](https://ambiqai.github.io/helia-edge/api/helia_edge/models/unet)**: A CNN with encoder-decoder architecture for segmentation tasks (key=`unet`)
+- **[U-NeXt](https://ambiqai.github.io/helia-edge/api/helia_edge/models/unext)**: A U-Net variant leveraging MBConv blocks (key=`unext`)
+- **[EfficientNetV2](https://ambiqai.github.io/helia-edge/api/helia_edge/models/efficientnet)**: A CNN leveraging MBConv blocks (key=`efficientnet`)
+- **[MobileOne](https://ambiqai.github.io/helia-edge/api/helia_edge/models/mobileone)**: A CNN aimed at sub-1ms inference (key=`mobileone`)
+- **[ResNet](https://ambiqai.github.io/helia-edge/api/helia_edge/models/resnet)**: A popular CNN often used for vision tasks (key=`resnet`)
+- **[Conformer](https://ambiqai.github.io/helia-edge/api/helia_edge/models/conformer)**: A transformer composed of both convolutional and self-attention blocks (key=`conformer`)
+- **[MetaFormer](https://ambiqai.github.io/helia-edge/api/helia_edge/models/metaformer)**: A transformer composed of both spatial mixing and channel mixing blocks (key=`metaformer`)
+- **[TSMixer](https://ambiqai.github.io/helia-edge/api/helia_edge/models/tsmixer)**: An All-MLP Architecture for Time Series Classification (key=`tsmixer`)
 - **[Bring-Your-Own-Model (BYOM)](https://ambiqai.github.io/heartkit/models/byom)**: Register new SoTA model architectures w/ custom configurations
 
 ---
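
To make the key-based lookup concrete, here is a hedged sketch of building one of these networks by key (the `hk.ModelFactory` call signature, input shape, and parameter dictionary are illustrative assumptions, not confirmed by this diff):

```py
import keras
import heartkit as hk

# Hypothetical usage: look up an architecture by its key and build it.
# The exact factory signature and parameter names are assumptions.
inputs = keras.Input(shape=(256, 1), name="ecg")
model = hk.ModelFactory.get("tcn")(
    x=inputs,
    params={"depth": 4, "kernel_size": 7},  # illustrative knobs only
    num_classes=2,
)
model.summary()
```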
Binary file modified docs/assets/heartkit-banner.png
2 changes: 1 addition & 1 deletion docs/css/custom.css
@@ -38,7 +38,7 @@ a.internal-link::after {
 }
 
 .md-grid {
-  max-width: 1440px;
+  max-width: 1840px;
 }
 
 /* Give space to lower icons so Gitter chat doesn't get on top of them */
4 changes: 2 additions & 2 deletions docs/datasets/icentia11k.md
@@ -12,7 +12,7 @@ More info available on [PhysioNet website](https://physionet.org/content/icentia
 
 ```py linenums="1"
 from pathlib import Path
-import neuralspot_edge as nse
+import helia_edge as helia
 import heartkit as hk
 
 ds = hk.DatasetFactory.get('icentia11k')(
@@ -24,7 +24,7 @@ More info available on [PhysioNet website](https://physionet.org/content/icentia
 
 # Create signal generator
 data_gen = ds.signal_generator(
-    patient_generator=nse.utils.uniform_id_generator(ds.patient_ids, repeat=True, shuffle=True),
+    patient_generator=helia.utils.uniform_id_generator(ds.patient_ids, repeat=True, shuffle=True),
     frame_size=256,
     samples_per_patient=5,
     target_rate=100,
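
Once the generator exists, pulling a few frames is enough to sanity-check the output. A hedged sketch, assuming each yielded item is a 1-D array of `frame_size` samples resampled to `target_rate` Hz (the generator's exact semantics are not shown in this diff):

```py
import numpy as np

# Draw three frames and verify their shape/dtype.
for _, frame in zip(range(3), data_gen):
    frame = np.asarray(frame)
    print(frame.shape, frame.dtype)  # expect (256,) given frame_size=256
```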
4 changes: 2 additions & 2 deletions docs/datasets/lsad.md
@@ -12,7 +12,7 @@ Please visit [Physionet](https://physionet.org/content/ecg-arrhythmia/1.0.0/) fo
 
 ```py linenums="1"
 from pathlib import Path
-import neuralspot_edge as nse
+import helia_edge as helia
 import heartkit as hk
 
 ds = hk.DatasetFactory.get('lsad')(
@@ -24,7 +24,7 @@ Please visit [Physionet](https://physionet.org/content/ecg-arrhythmia/1.0.0/) fo
 
 # Create signal generator
 data_gen = ds.signal_generator(
-    patient_generator=nse.utils.uniform_id_generator(ds.patient_ids, repeat=True, shuffle=True),
+    patient_generator=helia.utils.uniform_id_generator(ds.patient_ids, repeat=True, shuffle=True),
     frame_size=256,
     samples_per_patient=5,
     target_rate=100,
4 changes: 2 additions & 2 deletions docs/datasets/ludb.md
@@ -12,7 +12,7 @@ Please visit [Physionet](https://physionet.org/content/ludb/1.0.1/) for more det
 
 ```py linenums="1"
 from pathlib import Path
-import neuralspot_edge as nse
+import helia_edge as helia
 import heartkit as hk
 
 ds = hk.DatasetFactory.get('ludb')(
@@ -24,7 +24,7 @@ Please visit [Physionet](https://physionet.org/content/ludb/1.0.1/) for more det
 
 # Create signal generator
 data_gen = ds.signal_generator(
-    patient_generator=nse.utils.uniform_id_generator(ds.patient_ids, repeat=True, shuffle=True),
+    patient_generator=helia.utils.uniform_id_generator(ds.patient_ids, repeat=True, shuffle=True),
     frame_size=256,
     samples_per_patient=5,
     target_rate=100,
4 changes: 2 additions & 2 deletions docs/datasets/ptbxl.md
@@ -12,7 +12,7 @@ Please visit [Physionet](https://physionet.org/content/ptb-xl/1.0.3/) for more d
 
 ```py linenums="1"
 from pathlib import Path
-import neuralspot_edge as nse
+import helia_edge as helia
 import heartkit as hk
 
 ds = hk.DatasetFactory.get('ptbxl')(
@@ -24,7 +24,7 @@ Please visit [Physionet](https://physionet.org/content/ptb-xl/1.0.3/) for more d
 
 # Create signal generator
 data_gen = ds.signal_generator(
-    patient_generator=nse.utils.uniform_id_generator(ds.patient_ids, repeat=True, shuffle=True),
+    patient_generator=helia.utils.uniform_id_generator(ds.patient_ids, repeat=True, shuffle=True),
     frame_size=256,
     samples_per_patient=5,
     target_rate=100,
4 changes: 2 additions & 2 deletions docs/datasets/qtdb.md
@@ -10,7 +10,7 @@ Please visit [Physionet](https://doi.org/10.13026/C24K53) for more details.
 
 ```py linenums="1"
 from pathlib import Path
-import neuralspot_edge as nse
+import helia_edge as helia
 import heartkit as hk
 
 ds = hk.DatasetFactory.get('qtdb')(
@@ -22,7 +22,7 @@ Please visit [Physionet](https://doi.org/10.13026/C24K53) for more details.
 
 # Create signal generator
 data_gen = ds.signal_generator(
-    patient_generator=nse.utils.uniform_id_generator(ds.patient_ids, repeat=True, shuffle=True),
+    patient_generator=helia.utils.uniform_id_generator(ds.patient_ids, repeat=True, shuffle=True),
     frame_size=256,
     samples_per_patient=5,
     target_rate=100,
28 changes: 14 additions & 14 deletions docs/guides/byot.ipynb
@@ -76,7 +76,7 @@
     "import tensorflow as tf\n",
     "import numpy as np\n",
     "import numpy.typing as npt\n",
-    "import neuralspot_edge as nse\n",
+    "import helia_edge as helia\n",
     "import matplotlib.pyplot as plt"
    ]
   },
@@ -90,7 +90,7 @@
     "os.environ[\"HK_DATASET_PATH\"] = os.getenv(\"HK_DATASET_PATH\", \"./datasets\")\n",
     "\n",
     "plot_theme = hk.utils.dark_theme\n",
-    "nse.utils.silence_tensorflow()\n",
+    "helia.utils.silence_tensorflow()\n",
     "_ = hk.utils.setup_plotting(plot_theme)"
    ]
   },
@@ -163,7 +163,7 @@
     "    ) -> Generator[tuple[npt.NDArray, npt.NDArray], None, None]:\n",
     "        if isinstance(samples_per_patient, Iterable):\n",
     "            samples_per_patient = samples_per_patient[0]\n",
-    "        for pt_id in nse.utils.uniform_id_generator(patient_ids, shuffle=shuffle):\n",
+    "        for pt_id in helia.utils.uniform_id_generator(patient_ids, shuffle=shuffle):\n",
     "            for x, y in self.patient_data_generator(pt_id, samples_per_patient):\n",
     "                yield x, y\n",
     "            # END FOR\n",
@@ -242,7 +242,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "DataloaderFactory = nse.utils.create_factory(factory=\"BYOT.DataloaderFactory\", type=hk.HKDataloader)\n",
+    "DataloaderFactory = helia.utils.create_factory(factory=\"BYOT.DataloaderFactory\", type=hk.HKDataloader)\n",
     "DataloaderFactory.register(\"ptbxl\", PtbxlDataloader)"
    ]
   },
@@ -326,14 +326,14 @@
     "\n",
     "def load_train_datasets(\n",
     "    datasets: list[hk.HKDataset],\n",
-    "    dataloaderFactory: nse.utils.ItemFactory[hk.HKDataloader],\n",
+    "    dataloaderFactory: helia.utils.ItemFactory[hk.HKDataloader],\n",
     "    params: hk.HKTaskParams,\n",
     ") -> tuple[tf.data.Dataset, tf.data.Dataset]:\n",
     "    \"\"\"Loads training and validation datasets.\n",
     "\n",
     "    Args:\n",
     "        datasets(list[hk.HKDataset]): List of datasets to load.\n",
-    "        dataloaderFactory(nse.utils.ItemFactory[hk.HKDataloader]): Factory to create dataloaders.\n",
+    "        dataloaderFactory(helia.utils.ItemFactory[hk.HKDataloader]): Factory to create dataloaders.\n",
     "        params(hk.HKTaskParams): Task parameters.\n",
     "\n",
     "    Returns:\n",
@@ -392,10 +392,10 @@
     "    \"\"\"\n",
     "    os.makedirs(params.job_dir, exist_ok=True)\n",
     "\n",
-    "    logger = nse.utils.setup_logger(__name__, level=params.verbose, file_path=params.job_dir / \"train.log\")\n",
+    "    logger = helia.utils.setup_logger(__name__, level=params.verbose, file_path=params.job_dir / \"train.log\")\n",
     "    logger.debug(f\"Creating working directory in {params.job_dir}\")\n",
     "\n",
-    "    params.seed = nse.utils.set_random_seed(params.seed)\n",
+    "    params.seed = helia.utils.set_random_seed(params.seed)\n",
     "    logger.debug(f\"Random seed {params.seed}\")\n",
     "\n",
     "    with open(params.job_dir / \"train_config.json\", \"w\", encoding=\"utf-8\") as fp:\n",
@@ -416,7 +416,7 @@
     "    # Load existing model\n",
     "    if params.resume and params.model_file:\n",
     "        logger.debug(f\"Loading model from file {params.model_file}\")\n",
-    "        model = nse.models.load_model(params.model_file)\n",
+    "        model = helia.models.load_model(params.model_file)\n",
     "        params.model_file = None\n",
     "    else:\n",
     "        logger.debug(\"Creating model from scratch\")\n",
@@ -429,7 +429,7 @@
     "        )\n",
     "    # END IF\n",
     "\n",
-    "    flops = nse.metrics.flops.get_flops(model, batch_size=1, fpath=params.job_dir / \"model_flops.log\")\n",
+    "    flops = helia.metrics.flops.get_flops(model, batch_size=1, fpath=params.job_dir / \"model_flops.log\")\n",
     "\n",
     "    t_mul = 1\n",
     "    first_steps = (params.steps_per_epoch * params.epochs) / (np.power(params.lr_cycles, t_mul) - t_mul + 1)\n",
@@ -482,7 +482,7 @@
     "    )\n",
     "    logger.debug(f\"Model saved to {params.model_file}\")\n",
     "\n",
-    "    nse.plotting.plot_history_metrics(\n",
+    "    helia.plotting.plot_history_metrics(\n",
     "        history.history,\n",
     "        metrics=[\"loss\", metrics[0].name],\n",
     "        save_path=params.job_dir / \"history.png\",\n",
@@ -509,10 +509,10 @@
     "        params (HKTaskParams): Evaluation parameters\n",
     "    \"\"\"\n",
     "    os.makedirs(params.job_dir, exist_ok=True)\n",
-    "    logger = nse.utils.setup_logger(__name__, level=params.verbose, file_path=params.job_dir / \"test.log\")\n",
+    "    logger = helia.utils.setup_logger(__name__, level=params.verbose, file_path=params.job_dir / \"test.log\")\n",
     "    logger.debug(f\"Creating working directory in {params.job_dir}\")\n",
     "\n",
-    "    params.seed = nse.utils.set_random_seed(params.seed)\n",
+    "    params.seed = helia.utils.set_random_seed(params.seed)\n",
     "    logger.debug(f\"Random seed {params.seed}\")\n",
     "\n",
     "    datasets = [hk.DatasetFactory.get(ds.name)(**ds.params) for ds in params.datasets]\n",
@@ -522,7 +522,7 @@
     "    test_y = np.concatenate([y for _, y in test_ds.as_numpy_iterator()])\n",
     "\n",
     "    logger.debug(\"Loading model\")\n",
-    "    model = nse.models.load_model(params.model_file)\n",
+    "    model = helia.models.load_model(params.model_file)\n",
     "\n",
     "    logger.debug(\"Performing inference\")\n",
     "    rst = model.evaluate(test_ds, verbose=params.verbose, return_dict=True)\n",
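
With the dataloader registered and the `train`/`evaluate` functions defined in the notebook above, a minimal driver might look like the following hedged sketch (`configuration.json` is a placeholder path, and `hk.HKTaskParams` may require fields beyond those visible in this guide):

```py
import json
import heartkit as hk

# Load the same JSON configuration the HeartKit CLI would consume.
with open("configuration.json", "r", encoding="utf-8") as fp:
    params = hk.HKTaskParams(**json.load(fp))

train(params)     # training entry point defined in the notebook
evaluate(params)  # evaluation entry point defined in the notebook
```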