
Commit 2fcb8f9

Merge pull request #47 from Open-ISP/line-investment-by-period
Line investment by period
2 parents: f515d89 + bcca4fd (commit 2fcb8f9)

File tree

61 files changed: +6661 / -727 lines


.gitignore

Lines changed: 1 addition & 4 deletions
@@ -169,10 +169,7 @@ scratch.py
 # ispypsa ignores
 ispypsa_runs/**/*.csv
 ispypsa_runs/**/*.parquet
-ispypsa_runs/**/*.hdf5
+ispypsa_runs/**/*.h5

 # ignore doit database
 .doit*
-
-.repomixignore
-repomix.config.json

CLAUDE.md

Lines changed: 335 additions & 0 deletions

# Claude Coding Preferences for ISPyPSA

This document captures coding preferences and patterns learned from working on the ISPyPSA project.

## Testing Preferences

### Test Structure
- Prefer comparing results to hardcoded DataFrames rather than using assert statements to check general properties (see the contrast sketch after this list)
- Use the `csv_str_to_df` fixture to create test data in a readable format
- Sort DataFrames before comparison to ensure consistent ordering
- Test one thing at a time: implement and run tests individually before moving to the next
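
The contrast below is a minimal sketch, not project code: `summarise_capacity` is a stand-in defined inline, and its columns are invented for illustration. The second test pins down the full result, so any unexpected change to values, columns, or dtypes fails loudly.

```python
import pandas as pd


def summarise_capacity(df: pd.DataFrame) -> pd.DataFrame:
    # Stand-in for the function under test (illustrative only).
    return df.groupby("region", as_index=False)["capacity_mw"].sum()


def test_summarise_capacity_property_style():
    # Weaker style: only checks general properties of the result.
    result = summarise_capacity(pd.DataFrame({"region": ["NSW", "NSW"], "capacity_mw": [100.0, 50.0]}))
    assert len(result) == 1
    assert "capacity_mw" in result.columns


def test_summarise_capacity_hardcoded_style():
    # Preferred style: compare the whole result to a hardcoded DataFrame.
    result = summarise_capacity(pd.DataFrame({"region": ["NSW", "NSW"], "capacity_mw": [100.0, 50.0]}))
    expected = pd.DataFrame({"region": ["NSW"], "capacity_mw": [150.0]})
    pd.testing.assert_frame_equal(
        result.sort_values("region").reset_index(drop=True),
        expected.sort_values("region").reset_index(drop=True),
    )
```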

### Using csv_str_to_df Fixture

The `csv_str_to_df` fixture is a pytest fixture that converts CSV-formatted strings into pandas DataFrames. This makes tests more readable and maintainable.

#### Basic Usage
```python
import pandas as pd


def test_my_function(csv_str_to_df):
    # Create input data
    input_data_csv = """
    name, value, active
    item1, 100, True
    item2, 200, False
    """
    input_data = csv_str_to_df(input_data_csv)

    # Create expected output
    expected_output_csv = """
    name, processed_value
    item1, 150
    item2, 250
    """
    expected_output = csv_str_to_df(expected_output_csv)

    # Call function and compare
    result = my_function(input_data)
    pd.testing.assert_frame_equal(result, expected_output)
```

#### Important Notes on CSV Formatting
- **Whitespace**: The fixture handles whitespace around commas, making the CSV more readable (a minimal sketch of such a fixture follows this list)
- **Data Types**: The fixture infers data types (integers, floats, booleans, strings)
- **Special Values**: Use `NaN` for missing values, `inf` for infinity
- **Column Alignment**: Align columns for better readability (optional but recommended)
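
For reference, a fixture with this behaviour could look roughly like the sketch below. This is an assumption about the implementation, not the project's actual fixture (which would live in the test suite's `conftest.py`).

```python
import io

import pandas as pd
import pytest


@pytest.fixture
def csv_str_to_df():
    """Return a helper that parses an indented CSV string into a DataFrame."""

    def _csv_str_to_df(csv_str: str, **kwargs) -> pd.DataFrame:
        # Strip surrounding blank lines and per-line indentation, then let
        # pandas infer dtypes; skipinitialspace handles the padding used to
        # align columns in the test strings.
        lines = [line.strip() for line in csv_str.strip().splitlines()]
        return pd.read_csv(io.StringIO("\n".join(lines)), skipinitialspace=True, **kwargs)

    return _csv_str_to_df
```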

#### Complete Test Example
```python
import pandas as pd

# Assumes _translate_custom_constraints is imported from the ispypsa translator module.


def test_translate_custom_constraints_with_tables_no_rez_expansion(csv_str_to_df):
    """Test translation of custom constraints when tables are present but REZ transmission expansion is disabled."""

    # Input: REZ group constraints RHS
    rez_group_constraints_rhs_csv = """
    constraint_id, summer_typical
    REZ_NSW, 5000
    REZ_VIC, 3000
    """

    # Input: REZ group constraints LHS
    rez_group_constraints_lhs_csv = """
    constraint_id, term_type, variable_name, coefficient
    REZ_NSW, generator_capacity, GEN1, 1.0
    REZ_NSW, generator_capacity, GEN2, 1.0
    REZ_VIC, generator_capacity, GEN3, 1.0
    """

    # Input: Links DataFrame
    links_csv = """
    isp_name, name, carrier, bus0, bus1, p_nom, p_nom_extendable
    PathA-PathB, PathA-PathB_existing, AC, NodeA, NodeB, 1000, False
    """

    # Convert CSV strings to DataFrames
    ispypsa_tables = {
        "rez_group_constraints_rhs": csv_str_to_df(rez_group_constraints_rhs_csv),
        "rez_group_constraints_lhs": csv_str_to_df(rez_group_constraints_lhs_csv),
    }
    links = csv_str_to_df(links_csv)

    # Mock configuration
    class MockNetworkConfig:
        rez_transmission_expansion = False

    class MockConfig:
        network = MockNetworkConfig()

    config = MockConfig()

    # Call the function under test
    result = _translate_custom_constraints(config, ispypsa_tables, links)

    # Expected RHS result
    expected_rhs_csv = """
    constraint_name, rhs
    REZ_NSW, 5000
    REZ_VIC, 3000
    """
    expected_rhs = csv_str_to_df(expected_rhs_csv)

    # Expected LHS result - note the column order matches the actual output
    expected_lhs_csv = """
    constraint_name, variable_name, coefficient, component, attribute
    REZ_NSW, GEN1, 1.0, Generator, p_nom
    REZ_NSW, GEN2, 1.0, Generator, p_nom
    REZ_VIC, GEN3, 1.0, Generator, p_nom
    """
    expected_lhs = csv_str_to_df(expected_lhs_csv)

    # Assert results are as expected
    assert "custom_constraints_rhs" in result
    assert "custom_constraints_lhs" in result

    # Compare DataFrames with sorting to handle row order differences
    pd.testing.assert_frame_equal(
        result["custom_constraints_rhs"]
        .sort_values("constraint_name")
        .reset_index(drop=True),
        expected_rhs.sort_values("constraint_name").reset_index(drop=True)
    )

    pd.testing.assert_frame_equal(
        result["custom_constraints_lhs"]
        .sort_values(["constraint_name", "variable_name"])
        .reset_index(drop=True),
        expected_lhs.sort_values(["constraint_name", "variable_name"])
        .reset_index(drop=True)
    )
```

### Test Writing Best Practices

1. **Column Order Matters**: Pay attention to the actual column order in the output. Run the test first to see the actual order, then adjust the expected CSV to match.

2. **Sorting for Comparison**: When row order doesn't matter, sort both DataFrames before comparison:
   ```python
   pd.testing.assert_frame_equal(
       actual.sort_values(["col1", "col2"]).reset_index(drop=True),
       expected.sort_values(["col1", "col2"]).reset_index(drop=True)
   )
   ```

3. **Handling Special Cases** (a combined sketch follows this list):
   - For DataFrames with NaN values, use `check_dtype=False` if type precision isn't critical
   - For floating point comparisons, consider using `check_exact=False` or `rtol=1e-5`
   - For columns that are calculated (like `capital_cost`), exclude them from comparison:
     ```python
     actual_to_compare = actual.drop(columns=["capital_cost"])
     ```

4. **Empty DataFrame Testing**:
   ```python
   # Test empty input returns empty output
   result = my_function(pd.DataFrame())
   assert result.empty
   ```
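
Putting the options from item 3 together, a minimal sketch of a tolerant comparison looks like this (the DataFrames here are invented for illustration):

```python
import numpy as np
import pandas as pd

actual = pd.DataFrame(
    {"name": ["GEN1", "GEN2"], "p_nom": [100.000001, np.nan], "capital_cost": [1.23, 4.56]}
)
expected = pd.DataFrame({"name": ["GEN1", "GEN2"], "p_nom": [100.0, np.nan]})

# Drop the calculated column, then compare with relaxed dtypes and a float tolerance.
pd.testing.assert_frame_equal(
    actual.drop(columns=["capital_cost"]),
    expected,
    check_dtype=False,
    check_exact=False,
    rtol=1e-5,
)
```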

## Code Organization

### Function Design
- Prefer small, focused functions with single responsibilities
- Extract complex workflows into independent subfunctions that can be tested separately
- Functions should return data (e.g., DataFrames) rather than modifying state
- Each function should handle its own edge cases (None inputs, empty DataFrames); see the sketch after this list
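
A small sketch of these principles; the function name and columns are invented for illustration, not taken from the codebase. The helper guards its own edge cases and returns a new DataFrame rather than mutating its input:

```python
from typing import Optional

import pandas as pd


def _add_total_cost_column(generators: Optional[pd.DataFrame]) -> pd.DataFrame:
    """Return a copy of generators with a total_cost column added (illustrative only)."""
    # Handle edge cases locally so callers don't have to.
    if generators is None or generators.empty:
        return pd.DataFrame()
    result = generators.copy()
    result["total_cost"] = result["capacity_mw"] * result["cost_per_mw"]
    return result
```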

### Refactoring Patterns
- When a function has multiple independent workflows, break it into:
  1. Separate functions for each workflow
  2. A main orchestration function that calls the subfunctions
- Move validation logic (like empty checks) into the lowest appropriate level

### Example Refactoring Pattern
```python
# Before: Monolithic function with multiple responsibilities
def complex_function(inputs):
    # Workflow 1
    # ... lots of code ...

    # Workflow 2
    # ... lots of code ...

    return results


# After: Separated concerns
def _process_workflow_1(inputs):
    # Handle edge cases
    if inputs is None:
        return pd.DataFrame()
    # ... focused code ...
    return result


def _process_workflow_2(inputs):
    # ... focused code ...
    return result


def complex_function(inputs):
    result1 = _process_workflow_1(inputs)
    result2 = _process_workflow_2(inputs)
    return combine_results(result1, result2)
```

## Development Workflow

### Version Control
- Be cautious about committing changes - only commit when explicitly requested
- Use descriptive git messages that focus on the "why" rather than the "what"

### Environment Setup
- Use `uv` for Python package management
- Prefer `uv sync` over `uv pip install -e .` when a lock file exists
- Create separate virtual environments for different platforms (e.g., `.venv-wsl` for WSL)

### Using uv for Development

#### Initial Setup (WSL/Linux)
```bash
# Install uv if not already installed
curl -LsSf https://astral.sh/uv/install.sh | sh

# Source the uv environment
source $HOME/.local/bin/env

# Create a virtual environment (use different names for different platforms)
uv venv .venv-wsl  # For WSL
# or
uv venv .venv  # For native Linux/Mac

# Install dependencies from lock file
uv sync

# Or if you need to specify the venv location
UV_PROJECT_ENVIRONMENT=.venv-wsl uv sync
```

#### Running Tests with uv
```bash
# Basic test execution
source $HOME/.local/bin/env && uv run pytest tests/

# Run a specific test file
source $HOME/.local/bin/env && uv run pytest tests/test_translator/test_translate_custom_constraints.py

# Run a specific test function with verbose output
source $HOME/.local/bin/env && uv run pytest tests/test_translator/test_translate_custom_constraints.py::test_translate_custom_constraints_no_tables_no_links -v

# Run tests matching a pattern
source $HOME/.local/bin/env && uv run pytest tests/ -k "custom_constraint" -v

# With a specific virtual environment
source $HOME/.local/bin/env && UV_PROJECT_ENVIRONMENT=.venv-wsl uv run pytest tests/test_translator/test_translate_custom_constraints.py -v
```

#### Running Python Scripts with uv
```bash
# Run a Python script
source $HOME/.local/bin/env && uv run python example_workflow.py

# Run a module
source $HOME/.local/bin/env && uv run python -m ispypsa.model.build

# Interactive Python shell with project dependencies
source $HOME/.local/bin/env && uv run python

# Run with specific virtual environment
source $HOME/.local/bin/env && UV_PROJECT_ENVIRONMENT=.venv-wsl uv run python example_workflow.py
```

#### Common Workflow Commands
```bash
# Check which packages are installed
source $HOME/.local/bin/env && uv pip list

# Add a new dependency (this updates pyproject.toml and uv.lock)
source $HOME/.local/bin/env && uv add pandas

# Add a development dependency
source $HOME/.local/bin/env && uv add --dev pytest-mock

# Update dependencies
source $HOME/.local/bin/env && uv sync --upgrade

# Run pre-commit hooks
source $HOME/.local/bin/env && uv run pre-commit run --all-files
```

#### Troubleshooting
```bash
# If you get "Project virtual environment directory cannot be used" error
rm -rf .venv
source $HOME/.local/bin/env && uv sync

# To explicitly set UV_LINK_MODE if you see hardlink warnings
export UV_LINK_MODE=copy
source $HOME/.local/bin/env && uv sync
```

#### Best Practices
1. Always source the uv environment before running commands: `source $HOME/.local/bin/env`
2. Use `UV_PROJECT_ENVIRONMENT` when you have multiple virtual environments
3. Run `uv sync` after pulling changes that might have updated dependencies
4. Use the `uv run` prefix for all Python-related commands to ensure the correct environment is used

### Testing Workflow
1. Implement the test with hardcoded expected results
2. Run the test to see if it passes
3. Fix any issues (like column ordering) based on actual results
4. Verify the test passes before moving to the next one

## Code Style

### DataFrame Operations
- Be explicit about column ordering in tests
- Use pandas testing utilities for DataFrame comparisons:
  ```python
  pd.testing.assert_frame_equal(
      actual.sort_values("key").reset_index(drop=True),
      expected.sort_values("key").reset_index(drop=True)
  )
  ```

### Function Naming
- Use descriptive names that indicate the function's purpose
- Private functions should start with an underscore
- Use consistent naming patterns (e.g., `_process_*`, `_create_*`, `_translate_*`)

## Communication Preferences

### Progress Updates
- Work on one task at a time and show results before moving to the next
- Explain the reasoning behind refactoring suggestions
- Provide clear summaries of what was accomplished

### Problem Solving
- When tests fail, show the error and fix it step by step
- Consider alternative approaches (like refactoring) to simplify complex testing scenarios
- Ask for clarification when there are multiple possible approaches
