Commit 4dccc10

Initial README (#22)
* Initial README
* program -> solution
* Update README.md
* Update README.md

Co-authored-by: Tomasz Nowak <[email protected]>

* adjustments and installation steps for non-linuxes
* image size adjustment
* Fix merge errors

---------

Co-authored-by: Mateusz Masiarz <[email protected]>
1 parent 6781e97 commit 4dccc10

File tree

4 files changed (+87, -32 lines)

README.md

Lines changed: 60 additions & 9 deletions
````diff
@@ -1,11 +1,62 @@
-# sinol-make
-CLI tool for creating sio2 task packages. \
-Currently in development and not yet ready to be used.
+# <img src="https://avatars.githubusercontent.com/u/2264918?s=200&v=4" height=60em> sinol-make
 
-## Installing from source
-`pip3 install .`
+`sinol-make` is a CLI tool for creating and verifying problem packages
+for [sio2](https://github.com/sio2project/oioioi)
+with features such as:
+- measuring time and memory in the same deterministic way as sio2,
+- running the solutions in parallel,
+- keeping a git-friendly report of solutions' scores,
+- catching mistakes in the problem packages as early as possible,
+- and more.
+
+# Contents
+
+- [Why?](#why)
+- [Installation](#installation)
+- [Usage](#usage)
+- [Configuration](#configuration)
+- [Reporting bugs and contributing code](#reporting-bugs-and-contributing-code)
+
+### Why?
+
+The purpose of the tool is to make it easier to create good problem packages
+for official competitions, which requires collaboration with other people
+and using a multitude of "good practices" recommendations.
+While there are several excellent CLI tools for creating tests and solutions,
+they lack some built-in mechanisms for verifying packages and finding mistakes
+before uploading the package to the judge system.
+As sinol-make was created specifically for the sio2 problem packages,
+by default it downloads and uses sio2's deterministic mechanism of measuring
+solutions' runtime, called `oiejq`.
+
+### Installation
+
+It's possible to directly install [sinol-make](https://pypi.org/project/sinol-make/)
+through Python's package manager pip, which usually is installed alongside Python:
+
+```
+pip3 install sinol-make
+```
+
+As `oiejq` works only on Linux-based operating systems,
+*we do not recommend* using operating systems such as Windows or macOS.
+Nevertheless, `sinol-make` supports those operating systems,
+though there are additional installation steps required to use
+other tools for measuring time (which are non-deterministic and produce reports different from sio2):
+- Windows (WSL): `apt install time timeout`
+- macOS: `brew install gnu-time coreutils`
+
+### Usage
+
+The available commands (see `sinol-make --help`) are:
+
+- `sinol-make run` -- Runs selected solutions (by default all solutions) on selected tests (by default all tests) with a given number
+of CPUs. Measures the solutions' time with oiejq, unless specified otherwise. After running the solutions, it
+compares the solutions' scores with the ones saved in config.yml.
+Run `sinol-make run --help` to see available flags.
+
+### Reporting bugs and contributing code
+
+- Want to report a bug or request a feature? [Open an issue](https://github.com/sio2project/sinol-make/issues).
+- Want to help us build `sinol-make`? Create a Pull Request and we will gladly review it.
 
-## Running tests
-1. Install `sinol-make` with test dependencies: \
-```pip3 install .[tests]```
-2. Run `pytest` in root directory of this repository.
````

src/sinol_make/commands/run/__init__.py

Lines changed: 14 additions & 10 deletions
```diff
@@ -23,14 +23,18 @@ def get_name(self):
     def configure_subparser(self, subparser):
         parser = subparser.add_parser(
             'run',
-            help='Run current task',
-            description='Run current task'
+            help='Runs solutions in parallel on tests and verifies the expected solutions\' scores with the config.',
+            description='Runs selected solutions (by default all solutions) \
+on selected tests (by default all tests) \
+with a given number of cpus. \
+Measures the solutions\' time with oiejq, unless specified otherwise. \
+After running the solutions, it compares the solutions\' scores with the ones saved in config.yml.'
         )
 
         default_timetool = 'oiejq' if sys.platform == 'linux' else 'time'
 
-        parser.add_argument('--programs', type=str, nargs='+',
-                            help='programs to be run, for example prog/abc{b,s}*.{cpp,py}')
+        parser.add_argument('--solutions', type=str, nargs='+',
+                            help='solutions to be run, for example prog/abc{b,s}*.{cpp,py}')
         parser.add_argument('--tests', type=str, nargs='+',
                             help='tests to be run, for example in/abc{0,1}*')
         parser.add_argument('--cpus', type=int,
@@ -39,8 +43,8 @@ def configure_subparser(self, subparser):
         parser.add_argument('--ml', type=float, help='memory limit (in MB)')
         parser.add_argument('--hide_memory', dest='hide_memory', action='store_true',
                             help='hide memory usage in report')
-        parser.add_argument('--program_report', type=str,
-                            help='file to store report from program executions (in markdown)')
+        parser.add_argument('--solutions_report', type=str,
+                            help='file to store report from solution executions (in markdown)')
         parser.add_argument('--time_tool', choices=['oiejq', 'time'], default=default_timetool,
                             help='tool to measure time and memory usage (default when possible: oiejq)')
         parser.add_argument('--oiejq_path', type=str,
@@ -480,7 +484,7 @@ def compile_and_run(self, solutions):
                        for solution in solutions]
         compiled_commands = zip(solutions, executables, compilation_results)
         names = solutions
-        return self.run_solutions(compiled_commands, names, solutions, self.args.program_report)
+        return self.run_solutions(compiled_commands, names, solutions, self.args.solutions_report)
 
 
     def print_expected_scores(self, expected_scores):
@@ -499,7 +503,7 @@ def validate_expected_scores(self, results):
 
         config_expected_scores = self.config.get("sinol_expected_scores", {})
         used_solutions = results.keys()
-        if self.args.programs == None and config_expected_scores:  # If no solutions were specified, use all programs from config
+        if self.args.solutions == None and config_expected_scores:  # If no solutions were specified, use all solutions from config
             used_solutions = config_expected_scores.keys()
 
         used_groups = set()
@@ -549,7 +553,7 @@ def validate_expected_scores(self, results):
                     added_groups.add(group[0])
                 elif type == "remove":
                     # We check whether a solution was removed only when sinol_make was run on all of them
-                    if field == '' and self.args.programs == None and config_expected_scores:
+                    if field == '' and self.args.solutions == None and config_expected_scores:
                         for solution in change:
                             removed_solutions.add(solution[0])
                     # We check whether a group was removed only when sinol_make was run on all of them
@@ -753,7 +757,7 @@ def run(self, args):
         self.groups = list(sorted(set([self.get_group(test) for test in self.tests])))
         self.possible_score = self.get_possible_score(self.groups)
 
-        solutions = self.get_solutions(self.args.programs)
+        solutions = self.get_solutions(self.args.solutions)
         results = self.compile_and_run(solutions)
         validation_results = self.validate_expected_scores(results)
         self.print_expected_scores_diff(validation_results)
```
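
The renamed `--solutions` flag above can be illustrated with a minimal, self-contained argparse sketch. This is a simplified stand-in for the real subparser (which registers many more options), not the project's actual code:

```python
import argparse
import sys

# Simplified stand-in for the `run` subcommand's parser from the diff above;
# the real command also registers --tests, --cpus, --ml, and other flags.
parser = argparse.ArgumentParser(prog="sinol-make run")
parser.add_argument("--solutions", type=str, nargs="+",
                    help="solutions to be run, for example prog/abc{b,s}*.{cpp,py}")
# oiejq is Linux-only, so the default falls back to `time` elsewhere.
parser.add_argument("--time_tool", choices=["oiejq", "time"],
                    default="oiejq" if sys.platform == "linux" else "time",
                    help="tool to measure time and memory usage")

args = parser.parse_args(["--solutions", "prog/abc.cpp", "prog/abcs.cpp"])
print(args.solutions)  # ['prog/abc.cpp', 'prog/abcs.cpp']
```

Because `nargs='+'` collects one or more paths into a list, the old singular `--programs` spelling would now simply fail with an "unrecognized arguments" error, which is what the updated tests below rely on.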

tests/commands/run/test_integration.py

Lines changed: 4 additions & 4 deletions
```diff
@@ -124,17 +124,17 @@ def test_flag_tests(create_package, time_tool):
     assert command.tests == ["in/abc1a.in"]
 
 
-def test_flag_programs(capsys, create_package, time_tool):
+def test_flag_solutions(capsys, create_package, time_tool):
     """
-    Test flag --programs.
-    Checks if correct programs are run (by checking the output).
+    Test flag --solutions.
+    Checks if correct solutions are run (by checking the output).
     """
     package_path = create_package
     command = get_command()
     create_ins_outs(package_path, command)
 
     parser = configure_parsers()
-    args = parser.parse_args(["run", "--programs", "prog/abc1.cpp", "prog/abc2.cpp", "--time_tool", time_tool])
+    args = parser.parse_args(["run", "--solutions", "prog/abc1.cpp", "prog/abc2.cpp", "--time_tool", time_tool])
     command = Command()
     command.run(args)
```
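
Stripped of the package fixtures, the parse-then-run shape of the test above can be sketched on its own. `configure_parsers` here is a hypothetical stand-in for the project's real helper, kept just large enough to exercise the renamed flag:

```python
import argparse

def configure_parsers():
    # Hypothetical stand-in for the project's configure_parsers() helper:
    # a top-level parser with a `run` subcommand, as used in the test above.
    parser = argparse.ArgumentParser(prog="sinol-make")
    subparsers = parser.add_subparsers(dest="command")
    run = subparsers.add_parser("run")
    run.add_argument("--solutions", type=str, nargs="+")
    run.add_argument("--time_tool", choices=["oiejq", "time"], default="time")
    return parser

args = configure_parsers().parse_args(
    ["run", "--solutions", "prog/abc1.cpp", "prog/abc2.cpp", "--time_tool", "time"])
print(args.command, args.solutions)  # run ['prog/abc1.cpp', 'prog/abc2.cpp']
```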

tests/commands/run/test_unit.py

Lines changed: 9 additions & 9 deletions
```diff
@@ -52,7 +52,7 @@ def test_get_executable():
     assert command.get_executable("abc.cpp") == "abc.e"
 
 
-def test_compile_programs(create_package):
+def test_compile_solutions(create_package):
     package_path = create_package
     command = get_command(package_path)
     solutions = command.get_solutions(None)
@@ -105,7 +105,7 @@ def test_calculate_points():
 def test_run_solutions(create_package, time_tool):
     package_path = create_package
     command = get_command(package_path)
-    command.args = argparse.Namespace(program_report=False, time_tool=time_tool)
+    command.args = argparse.Namespace(solutions_report=False, time_tool=time_tool)
     create_ins_outs(package_path, command)
     command.tests = command.get_tests(None)
     command.groups = list(sorted(set([command.get_group(test) for test in command.tests])))
@@ -157,7 +157,7 @@ def test_validate_expected_scores_success():
     command.scores = command.config["scores"]
 
     # Test with correct expected scores.
-    command.args = argparse.Namespace(programs=["prog/abc.cpp"], tests=None)
+    command.args = argparse.Namespace(solutions=["prog/abc.cpp"], tests=None)
     results = {
         "abc.cpp": {1: "OK", 2: "OK", 3: "OK", 4: "OK"},
     }
@@ -166,7 +166,7 @@ def test_validate_expected_scores_success():
     assert results.removed_solutions == set()
 
     # Test with incorrect result.
-    command.args = argparse.Namespace(programs=["prog/abc.cpp"], tests=None)
+    command.args = argparse.Namespace(solutions=["prog/abc.cpp"], tests=None)
     results = {
         "abc.cpp": {1: "OK", 2: "OK", 3: "OK", 4: "WA"},
     }
@@ -175,7 +175,7 @@ def test_validate_expected_scores_success():
     assert len(results.changes) == 1
 
     # Test with removed solution.
-    command.args = argparse.Namespace(programs=None, tests=None)
+    command.args = argparse.Namespace(solutions=None, tests=None)
     results = {
         "abc.cpp": {1: "OK", 2: "OK", 3: "OK", 4: "OK"},
         "abc1.cpp": {1: "OK", 2: "OK", 3: "OK", 4: "WA"},
@@ -188,7 +188,7 @@ def test_validate_expected_scores_success():
 
     # Test with added solution and added group.
     command.config["scores"][5] = 0
-    command.args = argparse.Namespace(programs=["prog/abc.cpp", "prog/abc5.cpp"], tests=None)
+    command.args = argparse.Namespace(solutions=["prog/abc.cpp", "prog/abc5.cpp"], tests=None)
     results = {
         "abc.cpp": {1: "OK", 2: "OK", 3: "OK", 4: "OK", 5: "WA"},
         "abc5.cpp": {1: "OK", 2: "OK", 3: "OK", 4: "OK", 5: "WA"},
@@ -199,7 +199,7 @@ def test_validate_expected_scores_success():
     assert len(results.added_groups) == 1
 
     # Test with removed group.
-    command.args = argparse.Namespace(programs=["prog/abc.cpp"], tests=None)
+    command.args = argparse.Namespace(solutions=["prog/abc.cpp"], tests=None)
     results = {
         "abc.cpp": {1: "OK", 2: "OK", 3: "OK"},
     }
@@ -208,7 +208,7 @@ def test_validate_expected_scores_success():
     assert len(results.removed_groups) == 1
 
     # Test with correct expected scores and --tests flag.
-    command.args = argparse.Namespace(programs=["prog/abc.cpp"], tests=["in/abc1a.in", "in/abc2a.in"])
+    command.args = argparse.Namespace(solutions=["prog/abc.cpp"], tests=["in/abc1a.in", "in/abc2a.in"])
     results = {
         "abc.cpp": {1: "OK", 2: "OK"},
     }
@@ -222,7 +222,7 @@ def test_validate_expected_scores_fail(capsys):
     command.scores = command.config["scores"]
 
     # Test with missing points for group in config.
-    command.args = argparse.Namespace(programs=["prog/abc.cpp"], tests=None)
+    command.args = argparse.Namespace(solutions=["prog/abc.cpp"], tests=None)
     results = {
         "abc.cpp": {1: "OK", 2: "OK", 3: "OK", 4: "OK", 5: "OK"},
     }
```
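
The expected-scores checks these tests exercise boil down to diffing the verdicts just produced against the ones recorded in config.yml. A toy illustration of that comparison (the helper name and data shapes here are hypothetical, not the project's actual code):

```python
def diff_expected(config_expected, results, solutions=None):
    """Compare fresh per-group verdicts against the expected ones from config.

    When no --solutions were passed (solutions is None), every solution
    recorded in the config is validated, so entries that were not run
    this time count as removed.
    """
    used = set(results) if solutions is not None else set(config_expected)
    removed = used - set(results)
    changed = {name for name in results
               if name in config_expected and config_expected[name] != results[name]}
    return changed, removed

config = {"abc.cpp": {1: "OK", 2: "OK"}, "abc1.cpp": {1: "WA", 2: "OK"}}
results = {"abc.cpp": {1: "OK", 2: "WA"}}  # abc1.cpp was not run this time
changed, removed = diff_expected(config, results)
print(sorted(changed), sorted(removed))  # ['abc.cpp'] ['abc1.cpp']
```

This mirrors why several hunks above guard the "removed" check with `self.args.solutions == None`: a solution can only be declared missing when the run covered everything the config knows about.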
