-How to set up the evaluator:
-----------------------------
+================================================================================
-Edit `config', especially MO_ROOT, EVAL_USER, EVAL_GROUP, TEST_USERS, CT_UID_MIN and CT_UID_MAX.
+ The MO Contest Environment 1.0
-Create $MO_ROOT (here we assume it's /aux/mo)
+ (c) 2001--2005 Martin Mares <mj@ucw.cz>
-Create the evaluation users (outside $CT_UID_MIN .. $CT_UID_MAX):
+================================================================================
- mo-eval:x:65000:65000:MO Evaluator:/aux/mo/eval/mo-eval:/bin/bash
- mo-test1:x:65001:65000:MO Tester 1:/aux/mo/eval/mo-test1:/bin/bash
- mo-test2:x:65002:65000:MO Tester 2:/aux/mo/eval/mo-test2:/bin/bash
-Create the evaluation group:
+This is the programming contest environment used by the Czech Programming Olympiad
+and some other competitions.
- mo-eval:x:65000:
+You will find brief documentation in the doc/ directory; start with doc/index.html.
-Create the contestant users (inside $CT_UID_MIN .. $CT_UID_MAX; as many as you wish;
-the names don't matter):
+The whole package can be distributed and used under the terms of the GNU General
+Public License version 2.
- mo00:x:65100:65100:MO Contestant 00:/aux/mo/users/mo00/mo00:/bin/bash
- ...
- mo99:x:65199:65199:MO Contestant 99:/aux/mo/users/mo99/mo99:/bin/bash
-
-... and their groups (preferably with the same ID's):
-
- mo00:x:65100:
- ...
- mo99:x:65199:
-
-... and their passwords ...
-
-Run bin/mo-install to create the infrastructure.
-
-Run bin/mo-create-contestants to create contestants' home directories,
-files from template/* will be copied there automatically.
-
-If users are logging in remotely, set quotas and other limits for them.
-Don't use limits.conf for that since with sshd the limits would affect _root_
-(probably bug in interface between sshd and PAM). Just add to /etc/profile:
-
- if [ $UID -ge 65100 -a $UID -le 65199 ] ; then
- ulimit -Sc 8192 # core file size
- ulimit -SHu 32 # processes
- ulimit -SHn 256 # max open files
- ulimit -SHv 262144 # max virtual memory
- fi
-
-
-Various notes:
-~~~~~~~~~~~~~~
-- if you want to assign partial score to testcases, just make the OUTPUT_CHECK
- program generate a file $TDIR/$TEST.pts containing the score.
+If you have any suggestions, bug reports or improvements you would like to share
+with others, please send them to mj@ucw.cz.
--- /dev/null
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html40/strict.dtd">
+
+<html><head>
+<title>MO Eval - Anatomy</title>
+<link rev=made href="mailto:mj@ucw.cz">
+<link rel=stylesheet title=Default href="mo-eval.css" type="text/css" media=all>
+</head><body>
+
+<h1>The Anatomy of MO-Eval</h1>
+
+<p>MO-Eval lives in the following directory structure:
+
+<ul>
+<li><code>bin/</code> – all programs (usually shell scripts)
+<li><code>box/</code> – temporary files used by the sandbox module
+<li><code>doc/</code> – this documentation
+<li><code>examples/</code> – example problems
+<li><code>misc/</code> – various currently undocumented stuff
+<li><code>problems/</code> – definitions of problems (tasks)
+<li><code>public/</code> – data available publicly to the contestants
+<li><code>solutions/</code> – solutions of problems (both by contestants and by authors)
+<li><code>src/</code> – sources of the parts of the evaluator written in C
+<li><code>template/</code> – templates of contestants' home directories
+<li><code>testing/</code> – results of testing of solutions
+<li><code>tmp/</code> – various temporary files
+<li><code>Makefile</code> – as usual
+<li><code>config</code> – the main configuration file
+</ul>
+
+<p>When installed, it will create the following hierarchy:
+
+<ul>
+<li><code>eval/</code> – all data belonging to the evaluator, inaccessible to contestants
+<li><code>public/</code> – public data made available to the contestants
+<li><code>users/</code> – home directories of the contestants
+</ul>
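+
+<p>For example, with <code>MO_ROOT</code> set to <code>/aux/mo</code> (the value used
+throughout the installation examples), the installed hierarchy will be:
+<pre>
+	/aux/mo/eval/
+	/aux/mo/public/
+	/aux/mo/users/
+</pre>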
+
+<p>We have tried to make the whole system as flexible as possible, which has led to
+almost everything being written as simple shell scripts, building on a library of shell
+functions contained in <code>bin/lib</code>. The config files are also shell scripts,
+making it possible to do lots of fancy (and also confusing, to be honest :) ) things
+there, the most useful being variable substitutions.
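+
+<p>For example, since a config file is just a shell script, one setting can be built
+from another via ordinary variable substitution (a hypothetical sketch; the variable
+names here are illustrative, see the comments in the top-level <code>config</code>
+for the real ones):
+<pre>
+	TASK=sum
+	TIME_LIMIT=10
+	IN_NAME=$TASK.in	# expands to "sum.in"
+</pre>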
+
+</body></html>
--- /dev/null
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html40/strict.dtd">
+
+<html><head>
+<title>MO Eval - Evaluation</title>
+<link rev=made href="mailto:mj@ucw.cz">
+<link rel=stylesheet title=Default href="mo-eval.css" type="text/css" media=all>
+</head><body>
+
+<h1>Evaluating solutions</h1>
+
+<p>When the competition is over, copy all solutions submitted by the contestants
+to <code>solutions/</code><i>contestant</i><code>/</code><i>task</i>. If you use
+our submitting system, you can call <code>bin/mo-grab</code> to do this.
+
+<p>Then you can call <code>bin/ev</code> <i>contestant</i> <i>task</i> to evaluate
+a single solution. (In some cases, you can also add the name of the solution as the
+third parameter, which can be useful for comparing different authors' solutions.)
+
+<p>You can also use <code>bin/mo-ev-all</code> <i>task names</i> to evaluate
+solutions of the specified tasks by all contestants.
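+
+<p>For example (the contestant and task names are illustrative):
+<pre>
+	bin/ev mo42 sum            # evaluate mo42's solution of the task "sum"
+	bin/mo-ev-all sum match    # evaluate "sum" and "match" for all contestants
+</pre>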
+
+<p>Open-data problems are evaluated in a different way: you need to run
+<code>bin/ev-open</code> or <code>bin/mo-ev-open-all</code> instead.
+
+<h2>Results</h2>
+
+<p>For each solution evaluated, <code>bin/ev</code> creates the directory <code>testing/</code><i>contestant</i><code>/</code><i>task</i>
+containing:
+
+<ul>
+<li>a copy of the source code of the solution
+<li>the compiled executable of the solution
+<li><code>log</code> – the log file of compilation
+<li><i>test</i><code>.in</code> – input for the particular test
+<li><i>test</i><code>.out</code> – contestant's output for the particular test
+<li><i>test</i><code>.ok</code> – correct output for the particular test (if given in the problem definition)
+<li><i>test</i><code>.log</code> – detailed log of the particular test, including output of the judge
+<li><code>points</code> – summary of points assigned for the tests. Each line corresponds
+to a single test and contains three whitespace-separated columns: the name of the test,
+the number of points awarded, and a log message.
+</ul>
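+
+<p>A <code>points</code> file might therefore look like this (the test names, scores,
+and messages are illustrative):
+<pre>
+	1    10    OK
+	2     0    Wrong answer
+	3     4    Partial answer
+</pre>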
+
+<h2>Sandbox</h2>
+
+FIXME
+
+<h2>Score table</h2>
+
+<p>The <code>bin/mo-score</code> utility can be used to generate an HTML score table
+from all the points files. The quality of the output is not perfect,
+but it can serve as a basis for further formatting.
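+
+<p>A typical invocation might be (assuming the utility reads the points files under
+<code>testing/</code> and writes HTML to its standard output; check the script itself
+for the exact interface):
+<pre>
+	bin/mo-score sum match >score.html
+</pre>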
+
+</body></html>
--- /dev/null
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html40/strict.dtd">
+
+<html><head>
+<title>The MO Contest Environment</title>
+<link rev=made href="mailto:mj@ucw.cz">
+<link rel=stylesheet title=Default href="mo-eval.css" type="text/css" media=all>
+</head><body>
+
+<h1>The MO Contest Environment</h1>
+
+<p>The MO Contest Environment, a.k.a. MO-Eval, is a simple system for conducting programming competitions similar to the
+<a href="http://olympiads.win.tue.nl/ioi/">International Olympiad in Informatics</a> – a contest
+where the participants solve programming tasks, which are then evaluated off-line after the end of the
+competition. It's built in a modular way, so extending it to other types of programming contests
+(e.g., to on-line contests like the <a href="http://icpc.baylor.edu/icpc/">ACM ICPC</a>) should be
+pretty easy, but it hasn't been done yet.
+
+<p>We have been using this environment at the <a href="http://mo.mff.cuni.cz/p/index.html.en">Czech Olympiad in Programming</a>
+(officially a part of the Mathematical Olympiad) since 2002, and also at the <a href="http://mo.mff.cuni.cz/cpspc/">CPSPC</a>
+(Czech-Polish-Slovak Preparation Camp) when it's held in the Czech Republic.
+
+<h2>Download</h2>
+
+<p>You can download the current release <a href="http://atrey.karlin.mff.cuni.cz/~mj/download/eval/eval-1.0.tar.gz">eval-1.0</a>
+or browse <a href="http://atrey.karlin.mff.cuni.cz/~mj/download/eval/">the archive of past releases</a>.
+
+<h2>Documentation</h2>
+
+<ul>
+<li><a href="anatomy.html">Anatomy of MO-Eval</a>
+<li><a href="install.html">Installation</a>
+<li><a href="tasks.html">Tasks and their types</a>
+<li><a href="eval.html">Evaluating solutions</a>
+<li><a href="public.html">Utilities for contestants</a>
+</ul>
+
+<h2>Portability</h2>
+
+<p>The environment runs under Linux. We currently use a slightly modified installation of <a href="http://www.debian.org/">Debian
+GNU/Linux</a>, but it will happily work with any other Linux distribution with a 2.4 or newer kernel. Everything except the sandbox
+module (which heavily depends on the Linux kernel) should be easily portable to other UNIX systems, although you will probably
+need to install some of the GNU utilities (especially bash) and Perl. Porting to Windows is out of the question.
+
+<h2>Author</h2>
+
+<p>MO-Eval has been written by <a href="http://atrey.karlin.mff.cuni.cz/~mj/">Martin Mares</a>.
+Great thanks go to Jan Kara and Milan Straka for their help and for many fine ideas.
+
+<h2>License</h2>
+
+<p>The MO-Eval package can be used and distributed under the terms of the <a href="http://www.gnu.org/copyleft/gpl.html">GNU General
+Public License version 2</a>.
+
+<h2>Feedback</h2>
+
+<p>All bug reports, suggestions and patches are welcome. Please mail them to <a href="mailto:mj@ucw.cz">mj@ucw.cz</a>.
+
+</body></html>
--- /dev/null
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html40/strict.dtd">
+
+<html><head>
+<title>MO Eval - Installation</title>
+<link rev=made href="mailto:mj@ucw.cz">
+<link rel=stylesheet title=Default href="mo-eval.css" type="text/css" media=all>
+</head><body>
+
+<h1>The Installation of MO-Eval</h1>
+
+<p>MO-Eval can be installed in two possible ways:
+
+<h2>Minimal Installation</h2>
+
+<p>Just invoke <code>make</code> and edit the top-level config file to suit
+your needs, leaving <code>TEST_USER</code> commented out.
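+
+<p>In short (a minimal sketch; use whatever editor you like):
+<pre>
+	make           # compile the C parts of the evaluator
+	vi config      # adjust the settings, keep TEST_USER commented out
+</pre>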
+
+<p>In this setup, everything lives in the source tree you started with
+and you don't need any special privileges for either installing or running
+the evaluator.
+
+<p><em>Beware: The evaluated programs are <b>NOT</b> fully separated
+from the evaluation environment and they could interfere with it. Use this setup only for
+playing with the package, never for a real competition.</em>
+
+<h2>Normal Installation</h2>
+
+<p>The recommended way is to let the evaluator use two user accounts. One (let's
+call the user <code>mo-eval</code>) runs the evaluator and keeps all secret
+files like the test data; the other one (<code>mo-test</code>) runs the tested
+programs. There can be multiple test users if you want to run several evaluators
+in parallel. However, in practice the evaluation is so fast that this is seldom
+needed.
+
+<p>How to set up this installation:
+
+<ul>
+<li>Run <code>make</code> to compile all programs.
+<li>Edit <code>config</code> to suit your needs, in particular set <code>MO_ROOT</code>,
+<code>EVAL_USER</code>, <code>EVAL_GROUP</code>, <code>TEST_USER</code> and <code>TEST_USERS</code>.
+<li>Create <code>$MO_ROOT</code> (here we will assume that it's set to <code>/aux/mo</code>).
+<li>Create the evaluation users:
+<pre>
+ mo-eval:x:65000:65000:MO Evaluator:/aux/mo/eval/mo-eval:/bin/bash
+ mo-test1:x:65001:65000:MO Tester 1:/aux/mo/eval/mo-test1:/bin/bash
+ mo-test2:x:65002:65000:MO Tester 2:/aux/mo/eval/mo-test2:/bin/bash
+</pre>
+<li>And the evaluation group:
+<pre>
+ mo-eval:x:65000:
+</pre>
+<li>Run <code>bin/mo-install</code> as root to create the directory hierarchy under <code>$MO_ROOT</code>,
+install all parts of the evaluator there, and set the correct access rights.
+<li>Log in as <code>mo-eval</code> and do everything else from there.
+<li>Later, you can reinstall parts of the hierarchy, without affecting the rest, by running:
+ <ul>
+ <li><code>mo-create-public</code> to update the public data available to contestants
+ according to the contents of the <code>public</code> directory
+<li><code>mo-create-testusers</code> to update the home directories of the <code>mo-test</code> users.
+ </ul>
+
+</ul>
+
+<h2>Contestants' homes</h2>
+
+<p>MO-Eval can either take care of the home directories of contestants or use
+an existing infrastructure. In the former case, you need to do the following:
+
+<ul>
+<li>Set <code>CT_UID_MIN</code> and <code>CT_UID_MAX</code> in the top-level config file,
+as sketched at the end of this list. (The evaluator users described above should be outside this range!)
+<li>Create the contestant users inside the UID range you defined; choose names as you wish:
+<pre>
+ mo00:x:65100:65100:MO Contestant 00:/aux/mo/users/mo00/mo00:/bin/bash
+ ...
+ mo99:x:65199:65199:MO Contestant 99:/aux/mo/users/mo99/mo99:/bin/bash
+</pre>
+<li>Create a group for each of these users (preferably with the same IDs):
+<pre>
+ mo00:x:65100:
+ ...
+ mo99:x:65199:
+</pre>
+<li>(You can use the <code>bin/mo-create-logins</code> script to automate this
+process, including printing of leaflets with passwords, but you will probably need
+to customize the script.)
+<li>Run <code>bin/mo-create-contestants</code> as root to create the home directories.
+(The permissions are set up so that the contestants cannot see each other's directories
+even if they want to. However, you still need to make sure that there is no directory
+all of them can write to, like the system-wide <code>/tmp</code>. In our contest,
+the users work on their own machines and only the home directories are shared across
+the network, so this problem doesn't arise.)
+<li>If multiple contestants work on the same machine remotely, you need to set quotas
+and other limits for them. On some systems, you cannot use <code>limits.conf</code>
+for that, because <code>sshd</code> applies the limits as root, so you either
+limit root or the limits don't work at all. In such cases, modify <code>/etc/profile</code>:
+<pre>
+ if [ $UID -ge 65100 -a $UID -le 65199 ] ; then
+ ulimit -Sc 8192 # core file size
+ ulimit -SHu 32 # processes
+ ulimit -SHn 256 # max open files
+ ulimit -SHv 262144 # max virtual memory
+ fi
+</pre>
+</ul>
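+
+<p>For the example users above, the corresponding UID range in the top-level config
+file would be:
+<pre>
+	CT_UID_MIN=65100
+	CT_UID_MAX=65199
+</pre>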
+
+</body></html>
--- /dev/null
+BODY {
+ background-color: #e2d4a5;
+ color: #202000;
+}
+:link {
+ color: #0030a0;
+ text-decoration: none;
+}
+:visited {
+ color: #631782;
+ text-decoration: none;
+}
+A[href]:hover {
+ background-color: #e2c461;
+}
+
+
+H1 {
+ display: block;
+ font-size: 207.36%;
+ font-weight: bold;
+ margin: 0.67em 0 0.67em;
+ text-align: center;
+}
+
+H2 {
+ display: block;
+ font-size: 144%;
+ font-weight: bold;
+ margin: 1em 0 0.67em;
+ text-align: left;
+}
--- /dev/null
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html40/strict.dtd">
+
+<html><head>
+<title>MO Eval - Utilities for contestants</title>
+<link rev=made href="mailto:mj@ucw.cz">
+<link rel=stylesheet title=Default href="mo-eval.css" type="text/css" media=all>
+</head><body>
+
+<h1>Utilities for contestants</h1>
+
+<p>MO-Eval also offers several utilities for use by the contestants. By default, they are
+installed to the <code>public</code> directory, where you should also install a subset
+of the <code>problems</code> hierarchy containing the config files and public test
+cases (e.g., the example input/output from the problem description sheet).
+
+<ul>
+<li><code>compile</code> – compile a solution with the same settings as used
+by the evaluator.
+<li><code>check</code> – compile a solution and check it on the public test cases.
+<li><code>submit</code> – submit a solution for evaluation, after <code>check</code>-ing it.
+Each successfully submitted solution replaces the previous submitted solution for the same task.
+</ul>
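+
+<p>A contestant's session might therefore look like this (assuming each utility takes
+the name of the task as its argument; the task name is illustrative):
+<pre>
+	compile sum     # just compile the solution of "sum"
+	check sum       # compile it and run it on the public test cases
+	submit sum      # check it and submit it for evaluation
+</pre>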
+
+<h2>How it works</h2>
+
+<p>Compilation and checking use the same evaluation mechanism that is used later for the
+real evaluation. However, it runs with the privileges of the contestant and is limited
+to the publicly available test cases. The evaluation log is written to a file called
+<code>log</code> in the current directory; all other files are put to a <code>.box</code>
+subdirectory in the contestant's home.
+
+<p>The submit utility stores all submitted tasks in a <code>.submit</code> directory in the
+contestant's home. It's probably wise to warn the contestants that they shouldn't
+delete this directory :-)
+
+</body></html>
--- /dev/null
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html40/strict.dtd">
+
+<html><head>
+<title>MO Eval - Tasks</title>
+<link rev=made href="mailto:mj@ucw.cz">
+<link rel=stylesheet title=Default href="mo-eval.css" type="text/css" media=all>
+</head><body>
+
+<h1>Tasks and their types</h1>
+
+<p>MO-Eval supports the following standard types of tasks (new task types can be defined, but
+it takes some effort):
+
+<ul>
+<li><b>off-line tasks</b> – the program reads an input (from a file or from stdin) and then produces
+an output (to a file or to stdout). Then a <em>checker</em> (a.k.a. judge) program is run, which decides whether the output
+is correct. If there is only one correct output, the checker can be just a call to <code>diff</code>
+(see the sketch after this list). In the case of tasks with complicated scoring, the judge can also assign points explicitly.
+See the comments in the top-level config file for the exact interface of the judge.
+<li><b>interactive tasks</b> – the program needs to react interactively, usually by reading from stdin
+and sending results to stdout (sometimes the communication protocol is abstracted out as a library
+which the tested programs are linked with). In this case, the judge's stdio is cross-connected with the stdio
+of the tested program.
+<li><b>open-data tasks</b> – the contestants don't submit a program, but a set of output files instead.
+These are judged similarly to off-line tasks.
+</ul>
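+
+<p>For an off-line task with a unique correct output, the checker can be a trivial
+wrapper around <code>diff</code> (a minimal sketch; the argument convention shown here
+is an assumption, see the comments in the top-level config for the real interface):
+<pre>
+	#!/bin/bash
+	# Assumed convention: $1 = test input, $2 = contestant's output, $3 = correct output
+	exec diff -q "$2" "$3" >/dev/null
+</pre>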
+
+<h2>Setting up a task</h2>
+
+<p>To define a task, create a directory <code>problems/</code><i>task_name</i> and populate it
+with the following files (an example layout follows the list):
+
+<ul>
+<li><code>config</code> – the local configuration file for the task, which overrides defaults
+set in the top-level config. The task type, the list of test cases, and many other things can be defined
+here; consult the comments in the top-level config for an explanation.
+<li><i>test</i><code>.config</code> – each test case can have its own local configuration
+overrides, used for example if you want some test cases to have a different time limit.
+<li><i>test</i><code>.in</code> – input data for the particular test
+<li><i>test</i><code>.out</code> – correct output for the particular test (optional)
+</ul>
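+
+<p>For example, a task called <code>sum</code> with three test cases, the third of which
+has its own time limit, might look like this (the names are illustrative):
+<pre>
+	problems/sum/config
+	problems/sum/1.in
+	problems/sum/1.out
+	problems/sum/2.in
+	problems/sum/2.out
+	problems/sum/3.config     # e.g., overrides the time limit for this test
+	problems/sum/3.in
+	problems/sum/3.out
+</pre>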
+
+</body></html>