This repository was archived by the owner on Mar 18, 2021. It is now read-only.
Grand Unified Python Benchmarks evaluation #3
Description
Hello,
This is Catalin from the Server Scripting Languages Optimization Team at Intel Corporation. I tried doing an initial evaluation of FatPython's current performance on the Grand Unified Python Benchmarks (located at https://hg.python.org/benchmarks/). I encountered the following issues:
- I had to patch fatoptimizer because it failed when running any benchmark: in some cases the call site has no `starargs` or `kwargs` member (patch attached: fatoptimizer_gupb_compat.patch.zip).
- I had to patch perf.py from the benchmarks repository because not all benchmarks run (currently 37/43; patch attached: gupb_fatpython_compat.patch.zip).
- Benchmarks not working:
- 2to3
- chameleon_v2
- tornado_http
- django_v3
- normal_startup
- startup_nosite
Revisions
- GUPB: 9923b81a1d34 + patch
- Baseline CPython 3: 0f46c9a5735f
Hardware and OS Configuration
- Platform: Intel Xeon (Haswell-EP), 18 cores
- BIOS settings:
- Intel Turbo Boost Technology: false
- Hyper-Threading: false
- OS: CentOS 7.1.1503
- OS configuration:
- Address Space Layout Randomization (ASLR) disabled to reduce run-to-run variation via `echo 0 > /proc/sys/kernel/randomize_va_space`
- CPU frequency fixed at 2.3 GHz
- GCC version: 4.8.3
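The ASLR setting listed above can be double-checked programmatically before a run; a minimal sketch (Linux-specific, reading the same /proc file the echo command writes to; the helper name is mine):

```python
# Minimal Linux-only check that ASLR is disabled before benchmarking.
# The path matches the echo command above; a value of 0 means "no randomization".
from pathlib import Path

def aslr_disabled(proc_file: str = "/proc/sys/kernel/randomize_va_space") -> bool:
    """Return True if the kernel reports ASLR fully disabled."""
    try:
        return Path(proc_file).read_text().strip() == "0"
    except OSError:
        return False  # /proc not available (non-Linux); treat as not disabled

if __name__ == "__main__":
    print("ASLR disabled:", aslr_disabled())
```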
Running GUPB
The command used to run GUPB was:

```
python perf.py /path/to/baseline/python /path/to/fatpython/fatpython -b all -r --csv outfile.csv --affinity 6
```
`fatpython` is a shell script containing the following two lines:

```bash
#!/bin/bash
/absolute/path/to/fatpython/python -X fat "$@"
```
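The `--affinity 6` flag pins the benchmark to a single core to reduce scheduler noise. As a rough sketch of what such pinning amounts to (my illustration, not perf.py's actual implementation), the Linux-only `os.sched_setaffinity` call does the equivalent from Python:

```python
# Restrict the current process to one CPU core so benchmark runs don't
# migrate between cores. Linux-only: os.sched_setaffinity is not available
# on all platforms.
import os

def pin_to_core(core: int) -> set:
    """Pin the current process to `core` and return the resulting CPU set."""
    os.sched_setaffinity(0, {core})  # pid 0 means the current process
    return os.sched_getaffinity(0)
```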
Results
| Benchmark | Improvement |
|---|---|
| spectral_norm | 23.57% |
| nqueens | 4.57% |
| unpack_sequence | 2.22% |
| nbody | 2.08% |
| unpickle_list | 2.02% |
| meteor_contest | 1.87% |
| fastpickle | 1.38% |
| regex_effbot | 0.77% |
| chaos | 0.20% |
| json_load | 0.00% |
| etree_iterparse | -0.48% |
| hexiom2 | -0.48% |
| pidigits | -0.78% |
| pickle_dict | -0.80% |
| etree_parse | -1.38% |
| mako_v2 | -1.50% |
| fannkuch | -1.63% |
| fastunpickle | -1.73% |
| etree_generate | -2.15% |
| regex_v8 | -2.15% |
| regex_compile | -2.50% |
| pickle_list | -2.81% |
| richards | -2.90% |
| pathlib | -3.39% |
| etree_process | -3.57% |
| raytrace | -3.77% |
| silent_logging | -4.13% |
| go | -4.22% |
| telco | -4.23% |
| call_simple | -4.49% |
| json_dump_v2 | -4.49% |
| formatted_logging | -4.54% |
| call_method | -4.86% |
| call_method_slots | -5.63% |
| float | -5.92% |
| simple_logging | -6.12% |
| call_method_unknown | -8.34% |
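The percentages above follow directly from the per-benchmark timings in the CSV that `--csv outfile.csv` produces. A minimal sketch of the reduction, assuming (baseline, fatpython) timings per benchmark are already extracted (the numbers and the `improvement` helper below are illustrative, not perf.py's output format):

```python
# Reduce per-benchmark timings to the signed improvement percentages shown
# in the table: positive means FatPython was faster than the baseline.

def improvement(baseline: float, candidate: float) -> float:
    """Percent improvement of candidate over baseline (positive = faster)."""
    return (baseline - candidate) / baseline * 100.0

# benchmark: (baseline seconds, fatpython seconds) -- made-up example values
results = {
    "spectral_norm": (1.000, 0.764),
    "call_method_unknown": (1.000, 1.083),
}

for name, (base, fat) in sorted(results.items(),
                                key=lambda kv: improvement(*kv[1]),
                                reverse=True):
    print(f"{name}: {improvement(base, fat):+.2f}%")
```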