5 changes: 4 additions & 1 deletion .gitignore
@@ -171,4 +171,7 @@ cython_debug/
pdm.toml

#csv
*.csv
*.csv

#mesh temporary directory
emt_tmp/
1 change: 1 addition & 0 deletions EMT_data_analysis/analysis_scripts/Analysis_tools.py
@@ -88,6 +88,7 @@ def load_io_data(df):
]]

df_io = io.load_inside_outside_classification()
df_io = df_io[df_io['Z']<27]
Collaborator:
I tried to run this script and got an error on this line:

```
Traceback (most recent call last):
  File "/home/philip.garrison/workspace/aics/EMT_data_analysis/.venv/lib/python3.11/site-packages/pandas/core/indexes/base.py", line 3805, in get_loc
    return self._engine.get_loc(casted_key)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "index.pyx", line 167, in pandas._libs.index.IndexEngine.get_loc
  File "index.pyx", line 196, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 7081, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 7089, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Z'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/philip.garrison/workspace/aics/EMT_data_analysis/EMT_data_analysis/analysis_scripts/Analysis_tools.py", line 1558, in <module>
    run_all_analyses()
  File "/home/philip.garrison/workspace/aics/EMT_data_analysis/EMT_data_analysis/analysis_scripts/Analysis_tools.py", line 41, in run_all_analyses
    plot_inside_outside_migration_timing(df, FIGS_DIR, OUT_TYPE)
  File "/home/philip.garrison/workspace/aics/EMT_data_analysis/EMT_data_analysis/analysis_scripts/Analysis_tools.py", line 952, in plot_inside_outside_migration_timing
    dfio_merge = load_io_data(df)
                 ^^^^^^^^^^^^^^^^
  File "/home/philip.garrison/workspace/aics/EMT_data_analysis/EMT_data_analysis/analysis_scripts/Analysis_tools.py", line 91, in load_io_data
    df_io = df_io[df_io['Z']<27]
                  ~~~~~^^^^^
  File "/home/philip.garrison/workspace/aics/EMT_data_analysis/.venv/lib/python3.11/site-packages/pandas/core/frame.py", line 4102, in __getitem__
    indexer = self.columns.get_loc(key)
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/philip.garrison/workspace/aics/EMT_data_analysis/.venv/lib/python3.11/site-packages/pandas/core/indexes/base.py", line 3812, in get_loc
    raise KeyError(key) from err
KeyError: 'Z'
```

Collaborator Author:
This is because the current manifest on quilt doesn't have the coordinates of the nuclei centroids. For testing I temporarily changed the manifest to point to the copy currently on VAST, but it needs to be uploaded:
/allen/aics/users/filip.sluzewski/Public_Repos/emt-data-analysis/resubmission_scripts/nuclei_localization/mesh_features-resegmentation.csv
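Until the updated manifest is uploaded, a defensive check turns the opaque `KeyError: 'Z'` into an actionable message. A sketch only — the function name is hypothetical; the column name and cutoff come from the diff above:

```python
import pandas as pd

def filter_nuclei_below_z(df_io: pd.DataFrame, z_max: int = 27) -> pd.DataFrame:
    # The 'Z' column only exists in manifests that include nuclei centroid
    # coordinates; older quilt manifests are missing it.
    if 'Z' not in df_io.columns:
        raise KeyError(
            "Column 'Z' not found in the inside-outside manifest; "
            "the manifest likely predates the nuclei centroid columns."
        )
    return df_io[df_io['Z'] < z_max]
```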


dfio_merged=pd.merge(df_io, df_info, on='Data ID', suffixes=['','_remove'])
remove = [col for col in dfio_merged.columns if 'remove' in col]
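The `suffixes=['','_remove']` pattern above keeps the left frame's column names untouched and tags the right frame's duplicates so they can be dropped after the merge. A minimal sketch with toy frames (the toy column values are illustrative):

```python
import pandas as pd

df_io = pd.DataFrame({'Data ID': ['m1'], 'Z': [12.0]})
df_info = pd.DataFrame({'Data ID': ['m1'], 'Z': [99.0], 'Gene': ['H2B']})

# Left columns keep their names; clashing right columns get the '_remove' suffix
dfio_merged = pd.merge(df_io, df_info, on='Data ID', suffixes=['', '_remove'])
remove = [col for col in dfio_merged.columns if 'remove' in col]
dfio_merged = dfio_merged.drop(columns=remove)
# dfio_merged keeps df_io's 'Z' plus the non-clashing 'Gene' column
```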
27 changes: 13 additions & 14 deletions EMT_data_analysis/analysis_scripts/Nuclei_localization.py
@@ -11,7 +11,6 @@
import pyvista as pv
import trimesh
import point_cloud_utils as pcu
import pymeshfix as mf

from bioio import BioImage

@@ -25,7 +24,7 @@

def nuclei_localization(
df:pd.DataFrame,
movie_id:str,
data_id:str,
output_directory:str,
align_segmentation:bool=True,
):
@@ -36,8 +35,8 @@ def nuclei_localization(
----------
manifest_path: str
Path to the csv manifest of the full dataset
movie_id: str
Movie ID from manifest for data to process
data_id: str
Data ID from manifest for data to process
output_directory: str
Path to the output directory where the localized nuclei data will be saved.
align_segmentation: bool
@@ -57,7 +56,7 @@
elif df['Gene'].values[0] == 'EOMES|TBR2':
seg_path = df['EOMES Nuclear Segmentation URL'].values[0]
else:
raise ValueError(f"The move {movie_id} does not have EOMES or H2B segmentations")
raise ValueError(f"The movie {data_id} does not have EOMES or H2B segmentations")

# import pdb; pdb.set_trace()
segmentations = BioImage(df['CollagenIV Segmentation Probability URL'].values[0])
@@ -77,7 +76,7 @@
# localize nuclei for each timepoint
num_timepoints = int(df['Image Size T'].values[0])
nuclei = []
for timepoint in tqdm(range(num_timepoints), desc=f"Movie {movie_id}"):
for timepoint in tqdm(range(num_timepoints), desc=f"Movie {data_id}"):
# check if mesh exists for this timepoint
if f'{timepoint}' not in meshes.keys():
print(f"Mesh for timepoint {timepoint} not found.")
@@ -87,7 +86,7 @@
break

if align_segmentation:
alignment_matrix = alignment.parse_rotation_matrix_from_string(df['Camera Alignment Matrix'].values[0])
alignment_matrix = alignment.parse_rotation_matrix_from_string(df['Dual Camera Alignment Matrix Value'].values[0])
else:
alignment_matrix = np.zeros((3,3))

@@ -99,7 +98,7 @@
alignment_matrix=alignment_matrix
)

nuclei_tp['Movie ID'] = movie_id
nuclei_tp['Data ID'] = data_id
nuclei_tp['Time hr'] = timepoint * 0.5
nuclei.append(nuclei_tp)

@@ -110,7 +109,7 @@
newcols.extend(cols[:-2])
nuclei = nuclei[newcols]

out_fn = out_dir / (movie_id + "_localized_nuclei.csv")
out_fn = out_dir / (data_id + "_localized_nuclei.csv")
nuclei.to_csv(out_fn, index=False)
rmtree(tmp_dir)

@@ -230,8 +229,8 @@ def run_nuclei_localization(
----------
manifest_path: str
Path to the csv manifest of the full dataset
movie_id: str
Movie ID from manifest for data to process
data_id: str
Data ID from manifest for data to process
output_directory: str
Path to the output directory where the localized nuclei data will be saved.
align_segmentation: bool
@@ -244,13 +243,13 @@

print(f"Processing {len(df_cond)} movies with CollagenIV segmentations.")

for movie_id in tqdm(pd.unique(df_cond['Movie ID']), desc="Movies"):
df_id = df_manifest[df_manifest['Movie ID'] == movie_id]
for data_id in tqdm(pd.unique(df_cond['Data ID']), desc="Movies"):
df_id = df_manifest[df_manifest['Data ID'] == data_id]

# make sure the movie has the required segmentations
nuclei_localization(
df=df_id,
movie_id=movie_id,
data_id=data_id,
output_directory=output_directory,
align_segmentation=align_segmentation
)
54 changes: 23 additions & 31 deletions EMT_data_analysis/figure_generation/colony_mask.py
@@ -14,37 +14,37 @@
from skimage.morphology import remove_small_objects
import argparse
from typing import List

from EMT_data_analysis.tools import io, const

def main(
dataset_manifest_path: str,
colony_feature_manifest_path: str,
movie_id: str,
data_id: str,
out_dir: str,
):
'''
This function creates a visualization of the colony mask in 3D for 0, 16, 32, and 48 hours.

Parameters
----------
dataset_manifest_path: str
Path to the csv manifest containing summary data of the entire dataset
colony_feature_manifest_path: str
Path to the csv manifest containing results from brightfield colony mask feature extraction.
movie_id: str
Movie Unique ID of the movie.
data_id: str
Data ID of the movie.
out_dir: str
Path to the output directory where the visualization will be saved.
'''

if out_dir is None:
out_dir = io.setup_base_directory_name("figures/3D Renders")
else:
out_dir = Path(out_dir)
out_dir.mkdir(exist_ok=True, parents=True)

# get bottom z layer
df_feature = pd.read_csv(colony_feature_manifest_path)
zbottom = df_feature.loc[df['Movie Unique ID'] == movie_id, 'z_bottom'].values[0]
df_feature = io.load_image_analysis_extracted_features()
zbottom = int(df_feature.loc[df_feature['Data ID'] == data_id, 'Bottom Z plane'].values[0])

# get segmentation and base filename
df_manifest = pd.read_csv(dataset_manifest_path)
seg_fn = df_manifest.loc[df_manifest['Movie Unique ID'] == movie_id, 'All Cells Mask File Download'].values[0]
seg = BioIo(seg_fn)
df_manifest = io.load_imaging_and_segmentation_dataset()
seg_fn = df_manifest.loc[df_manifest['Data ID'] == data_id, 'All Cells Mask File Download'].values[0]
seg_file = BioImage(seg_fn)
outname = Path(seg_fn).stem + '_figure'

# lighting setup
@@ -65,6 +65,7 @@ def main(
)

# process frames for 0, 16, 32, and 48 hours
pv.start_xvfb()
pl = pv.Plotter(off_screen=True, notebook=False, window_size=(1088, 1088))
for tp in tqdm([0, 32, 64, 96]):
# clear scene
@@ -231,30 +232,21 @@ def cgal_vertices_faces_triangle_mesh(Q: Polyhedron_3):
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Generate figures for colony mask segmentation.')


parser.add_argument(
'--manifest_path',
type=str,
required=True,
help='Path to the csv manifest containing summary data of the entire dataset.'
)
parser.add_argument(
'--feature_path',
type=str,
required=True,
help='Path to the csv manifest containing results from brightfield colony mask feature extraction.'
)
parser.add_argument(
'--movie_id',
'--data_id',
type=str,
required=True,
help='Movie Unique ID of the movie.'
)
parser.add_argument(
'--output_directory',
type=str,
required=True,
help='Path to the output directory where the visualization will be saved.'
)

args = parser.parse_args()
main(args.manifest_path, args.feature_path, args.movie_id, args.output_directory)
if args.data_id is None:
for data_id in const.EXAMPLE_ACM_IDS:
main(data_id, args.output_directory)
else:
main(args.data_id, args.output_directory)
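The fallback above (render every example movie when `--data_id` is omitted) can be isolated into a tiny helper for clarity. A sketch only — `resolve_ids` is a hypothetical name; the ID list comes from `tools/const.py` in this diff:

```python
# from EMT_data_analysis/tools/const.py in this PR
EXAMPLE_ACM_IDS = ['3500005824_36', '3500006256_12']

def resolve_ids(data_id=None):
    # No ID given -> process all paper examples; otherwise just the requested movie
    return EXAMPLE_ACM_IDS if data_id is None else [data_id]
```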
EMT_data_analysis/figure_generation/inside-outside_classification.py
@@ -11,38 +11,43 @@
import pandas as pd
import argparse
import quilt3 as q3
from typing import Optional

from EMT_data_analysis.tools import alignment, io
from EMT_data_analysis.tools import alignment, io, const


def main(
data_id: str,
output: str
data_id: Optional[str]=None,
output: Optional[str]=None
):
'''
Generate three figures for the inside-outside classification of nuclei
at 0, 16, and 32 hours.
Parameters
----------
mesh_fn: str
Path to the .vtm file for the whole colony timelapse.
mid: str
data_id: str
Data ID of the movie.
data_csv: str
Path to the CSV file containing the inside-outside classification data.
output: str
Path to the output directory where the figures will be saved.
'''
# ensure output directory exists
output = Path(output)
output.mkdir(exist_ok=True, parents=True)

if data_id is None:
data_id = const.EXAMPLE_IO_ID

if output is None:
output = io.setup_base_directory_name("figures/Inside-Outside/mesh-figures")
else:
output = Path(output)
output.mkdir(exist_ok=True, parents=True)

# load data
df_meta = io.load_imaging_and_segmentation_dataset()
df_meta = df_meta[df_meta['Data ID'] == data_id]
df = io.load_inside_outside_classification()
df = df[df['Data ID'] == data_id]
df = df[df['Z']<27]
Collaborator:
When I try to run this script, it hits an error here:

```
$ pdm run EMT_data_analysis/figure_generation/inside-outside_classification.py
Total number of movies in the dataset: 3491
Traceback (most recent call last):
  File "/home/philip.garrison/workspace/aics/EMT_data_analysis/.venv/lib/python3.11/site-packages/pandas/core/indexes/base.py", line 3805, in get_loc
    return self._engine.get_loc(casted_key)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "index.pyx", line 167, in pandas._libs.index.IndexEngine.get_loc
  File "index.pyx", line 196, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 7081, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 7089, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Z'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/philip.garrison/workspace/aics/EMT_data_analysis/EMT_data_analysis/figure_generation/inside-outside_classification.py", line 165, in <module>
    main(args.data_id, args.output)
  File "/home/philip.garrison/workspace/aics/EMT_data_analysis/EMT_data_analysis/figure_generation/inside-outside_classification.py", line 50, in main
    df = df[df['Z']<27]
            ~~^^^^^
  File "/home/philip.garrison/workspace/aics/EMT_data_analysis/.venv/lib/python3.11/site-packages/pandas/core/frame.py", line 4102, in __getitem__
    indexer = self.columns.get_loc(key)
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/philip.garrison/workspace/aics/EMT_data_analysis/.venv/lib/python3.11/site-packages/pandas/core/indexes/base.py", line 3812, in get_loc
    raise KeyError(key) from err
KeyError: 'Z'
```

Collaborator Author:
Same as the previous error: the quilt manifest needs to be updated to include this column.


tmp_dir = Path("./emt_tmp/nuclei_localization/")
tmp_dir.mkdir(exist_ok=True, parents=True)
@@ -146,14 +151,12 @@ def create_nucleus_mesh(df_nucleus: pd.DataFrame):
parser = argparse.ArgumentParser(description='Generate figures for inside-outside classification of nuclei.')
parser.add_argument(
'--data_id',
type=str,
default='3500005828_45',
help='FMS ID of the movie.'
type=str,
help='Data ID of the movie.'
)
parser.add_argument(
'--output',
type=str,
required=True,
help='Path to the output directory where the figures will be saved.'
)

8 changes: 7 additions & 1 deletion EMT_data_analysis/tools/const.py
@@ -42,4 +42,10 @@
'3500005834_55']

# Nucleus Fraction Inside/Outside Example
EXAMPLE_IO_ID = '3500005828_45'
EXAMPLE_IO_ID = '3500005828_45'

# All Cells Mask Examples
EXAMPLE_ACM_IDS = [
'3500005824_36',
'3500006256_12'
]
30 changes: 29 additions & 1 deletion README.md
@@ -44,7 +44,35 @@ This will generate CSV for individual nuclei classified as inside the basement m

Run: `python Analysis_tools.py`

This will generate the plots in the manuscript and store them in `results/figures` folder. The manifests used as inputs in this workflow are automatically downloaded from [AWS](https://open.quiltdata.com/b/allencell/tree/aics/emt_timelapse_dataset/manifests/) by default. The user can opt to also use local version of these manifests if they produced locally by running the scripts `Feature_extraction.py`, `Metric_computation.py` and `Nuclei_localization.py`. To use local version of the manifests, please set `load_from_aws=False` everywhere in the script `Analysis_plots.py`.
This will generate the plots in the manuscript and store them in the `results/figures` folder. The manifests used as inputs in this workflow are automatically downloaded from [AWS](https://open.quiltdata.com/b/allencell/tree/aics/emt_timelapse_dataset/manifests/) by default.

## 5 - [Optional] 3D Example Rendering
Collaborator:
Suggested change:
```diff
-## 5 - [Optional] 3D Example Rendering
+## 5 - 3D Example Rendering
```

I was under the impression that all the steps are optional? Do the other steps depend on the results of previous steps?

Collaborator Author:
Technically no, as all of the code pulls from the quilt dataset, but if the user were to process their own data, each step would depend on the previous one.

Collaborator:
Gotcha. If you want to make those relationships between steps explicit, I'd recommend writing more of an introduction at the top of the README.

Our goal for reproducibility for this repo is just that people can run our code on our data and produce the figures in the paper: if we want to try to support users running on their own data, there's a lot more work we have to do.


The functions in `EMT_data_analysis/figure_generation` can be used to generate 3D renderings shown in the paper. Functions have only been tested on Ubuntu 18.04/22.04
Collaborator:
Suggested change:
```diff
-The functions in `EMT_data_analysis/figure_generation` can be used to generate 3D renderings shown in the paper. Functions have only been tested on Ubuntu 18.04/22.04
+The functions in `EMT_data_analysis/figure_generation` can be used to generate 3D renderings shown in the paper.
```

At the top of the readme we already said our code was tested on 18.04. If some of the code doesn't work on 18.04 and needs 22.04, that's a different thing. The top of the readme specifies 18.04.2 though, and our machines have upgraded to 18.04.4 and 18.04.6, so we should update that to be accurate to how we are testing.

Collaborator Author:
The code will work on both 18.04 and 22.04 since the resubmission data was processed on the A100 machines. I don't think we need to be so specific as listing a specific sub-version of Ubuntu though

Collaborator:
Unless we test all the code in this repo on 22.04, I think it's easier for users to understand "all the EMT_data_analysis code was tested on 18.04" than "all the EMT_data_analysis code was tested on 18.04, and also some of the code was tested on 22.04, too."


On Ubuntu or Debian:
```bash
sudo apt-get install xvfb libgl1-mesa-glx
```
On Windows:
Comment out any instance of `pv.start_xvfb()` in the code before running.
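A single platform guard can replace the manual comment-out step on Windows. A sketch only — `needs_xvfb` is a hypothetical helper; `pv.start_xvfb()` is only available on Linux:

```python
import platform

def needs_xvfb(os_name: str) -> bool:
    # xvfb (X virtual framebuffer) exists only on Linux; Windows and macOS
    # can render off-screen without it
    return os_name == "Linux"

# in the figure scripts (sketch):
# import pyvista as pv
# if needs_xvfb(platform.system()):
#     pv.start_xvfb()
```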

### All Cells Mask
Run
```bash
python colony_mask.py --data_id [Optional] --output_directory [Optional]
```

Collaborator:

Suggested change:
```diff
-python colony_mask.py --data_id [Optional] --output_directory [Optional]
+python EMT_data_analysis/figure_generation/colony_mask.py --data_id [Optional] --output_directory [Optional]
```

Collaborator Author:

users should be in EMT_data_analysis/figure_generation/ already

Collaborator (@pgarrison, Nov 7, 2025):

> users should be in EMT_data_analysis/figure_generation/ already

In that case, the instructions should specify that. However, I think it's simpler (fewer steps for the user) if we make all the instructions work from the top level of the repo. (This is also an issue with the instructions for the previous steps; I can make a PR for that.)
If no input arguments are provided, the code will default to the data shown in the paper and output results to `EMT_data_analysis/results/3D_all_cells_mask`.
Collaborator:
Suggested change:
```diff
-If no input arguments are provided, the code will default to the data shown in the paper and output results to `EMT_data_analysis/results/3D_all_cells_mask`.
+If no input arguments are provided (i.e., `python EMT_data_analysis/figure_generation/colony_mask.py`), the code will default to the data shown in the paper and output results to `EMT_data_analysis/results/3D_all_cells_mask`.
```

Collaborator Author:

By "input arguments" I mean `--data_id` or `--output_directory`.

Collaborator (@pgarrison, Nov 7, 2025):
Yes, my suggestion was to provide clarity for users who might not understand that. For example, I could imagine someone leaving out the "[Optional]" pieces and running `python colony_mask.py --data_id --output_directory`.

Collaborator:
I ran colony_mask.py but did not get a `3D_all_cells_mask` directory:

```
$ pdm run EMT_data_analysis/figure_generation/colony_mask.py
/home/philip.garrison/workspace/aics/EMT_data_analysis/EMT_data_analysis/tools/io.py:20: DtypeWarning: Columns (0,2,4,6,7,13,18,19,20,21,22,24,25,26,28,32,40,41,46,47,48,54,55,58,68,71,72,79,80,83,85,86,88,90,93) have mixed types. Specify dtype option on import or set low_memory=False.
  df = pd.read_csv(path)
Total number of movies in the dataset: 3491
/home/philip.garrison/workspace/aics/EMT_data_analysis/.venv/lib/python3.11/site-packages/bioio_ome_zarr/reader.py:87: UserWarning: Warning: reading from S3 without fs_kwargs. Consider providing fs_kwargs (e.g., {'anon': True} for public S3) to ensure accurate reading.
  warnings.warn(
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [06:56<00:00, 104.02s/it]
/home/philip.garrison/workspace/aics/EMT_data_analysis/EMT_data_analysis/tools/io.py:20: DtypeWarning: Columns (0,2,4,6,7,13,18,19,20,21,22,24,25,26,28,32,40,41,46,47,48,54,55,58,68,71,72,79,80,83,85,86,88,90,93) have mixed types. Specify dtype option on import or set low_memory=False.
  df = pd.read_csv(path)
Total number of movies in the dataset: 3491
/home/philip.garrison/workspace/aics/EMT_data_analysis/.venv/lib/python3.11/site-packages/bioio_ome_zarr/reader.py:87: UserWarning: Warning: reading from S3 without fs_kwargs. Consider providing fs_kwargs (e.g., {'anon': True} for public S3) to ensure accurate reading.
  warnings.warn(
100%|████████████████████████████████████████████████████████████████| 4/4 [06:58<00:00, 104.65s/it]
$ ls EMT_data_analysis/results/
feature_extraction  figures  metric_computation  nuclei_localization
```

Collaborator Author:

We need to correct either the code or the README: right now the renders are saved in `figures/3D Renders`.

Data ID values are only valid inputs if they have a non-empty value for `All Cells Mask File Download` in the `image_and_segmentation_data.csv` manifest on [AWS](https://open.quiltdata.com/b/allencell/tree/aics/emt_timelapse_dataset/manifests/).
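To see which Data ID values qualify, filter the manifest for rows where that column is populated. A sketch only — `valid_acm_ids` is a hypothetical helper; the column names come from the manifest described above:

```python
import pandas as pd

def valid_acm_ids(df_manifest: pd.DataFrame) -> list:
    # Keep only movies whose All Cells Mask download link is present and non-empty
    col = 'All Cells Mask File Download'
    mask = df_manifest[col].notna() & (df_manifest[col].astype(str) != '')
    return df_manifest.loc[mask, 'Data ID'].tolist()
```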

### Inside-Outside Classification
Run
```bash
python inside-outside_classification.py --data_id [Optional] --output_directory [Optional]
```

Collaborator:

Suggested change:
```diff
-python inside-outside_classification.py --data_id [Optional] --output_directory [Optional]
+python EMT_data_analysis/figure_generation/inside-outside_classification.py --data_id [Optional] --output_directory [Optional]
```

Collaborator Author:

users should be in EMT_data_analysis/figure_generation/ already
If no input arguments are provided, the code will default to the data shown in the paper and output results to `EMT_data_analysis/results/Inside-Outside/mesh-figures`.
Collaborator:
Suggested change:
```diff
-If no input arguments are provided, the code will default to the data shown in the paper and output results to `EMT_data_analysis/results/Inside-Outside/mesh-figures`.
+If no input arguments are provided (i.e., `python EMT_data_analysis/figure_generation/inside-outside_classification.py`), the code will default to the data shown in the paper and output results to `EMT_data_analysis/results/Inside-Outside/mesh-figures`.
```

Collaborator Author:

By "input arguments" I mean `--data_id` or `--output_directory`.

Data ID values are only valid inputs if they have a non-empty value for `CollagenIV Segmentation Mesh Folder` in the `image_and_segmentation_data.csv` manifest on [AWS](https://open.quiltdata.com/b/allencell/tree/aics/emt_timelapse_dataset/manifests/).


# Contact
If you have questions about this code, please reach out to us at [email protected].