When the competition is over, copy all solutions submitted by the contestants
to solutions/<contestant>/<task>. If you use our submitting system, you can
call bin/mo-grab to do this.
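For illustration, with two hypothetical contestants alice and bob and a task
named sort (all names here are made up), the expected layout would be:

    solutions/alice/sort/sort.c
    solutions/bob/sort/sort.cpp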
Then you can call bin/ev <contestant> <task> to evaluate a single solution.
(In some cases, you can also add the name of the solution as the third
parameter, which can be useful for comparing different authors' solutions.)
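Using the hypothetical names from above, a single evaluation and one with an
explicit solution name would look like:

    bin/ev alice sort
    bin/ev alice sort sort-bruteforce.c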
You can also use bin/mo-ev-all <tasks> to evaluate solutions of the specified
tasks by all contestants.
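For example, to evaluate two hypothetical tasks for every contestant:

    bin/mo-ev-all sort search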
The open data problems are evaluated in a different way: you need to run
bin/ev-open or bin/mo-ev-open-all instead.
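As a sketch only, assuming bin/ev-open takes the same <contestant> and <task>
arguments as bin/ev and that bin/mo-ev-open-all takes task names like
bin/mo-ev-all (check the utilities' usage messages to be sure):

    bin/ev-open alice flood
    bin/mo-ev-open-all flood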
For each solution evaluated, bin/ev creates the directory
testing/<contestant>/<task> containing:
log – the log file of the compilation
<test>.in – input for the particular test
<test>.out – contestant's output for the particular test
<test>.ok – correct output for the particular test (if given in the problem definition)
<test>.log – detailed log of the particular test, including the output of the judge
points – summary of points assigned for the tests. Each line corresponds to a
single test and contains three whitespace-separated columns: the name of the
test, the number of points awarded, and a log message (see the example below).
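A hypothetical points file for a task with three tests could look like this
(the test names, scores, and messages are invented for illustration):

    1 10 OK
    2 0 Wrong answer
    3 4 Output partially correct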
The solutions under evaluation run in a simple sandbox which restricts the
time, memory, and system calls available to the program. You can set the
sandbox options in the top-level config file; see bin/box --help for a list
of the available options.
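As an example sketch, the config file uses shell-style variable assignments;
the option names below are assumptions for illustration, so consult
bin/box --help for the real ones:

    # Assumed option names: time limit in seconds, memory limit in KB
    TIME_LIMIT=10
    MEM_LIMIT=65536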
The bin/mo-score utility can be used to generate an HTML score table from all
the points files. The quality of the output is not perfect, but it can serve
as a basis for further formatting.
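Assuming mo-score takes the task names and writes the HTML table to standard
output (a hypothetical invocation, not confirmed by this manual):

    bin/mo-score sort search >score.html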