From: Martin Mares
Date: Sat, 19 Feb 2005 14:14:29 +0000 (+0000)
Subject: Added documentation.
X-Git-Tag: python-dummy-working~442
X-Git-Url: http://mj.ucw.cz/gitweb/?a=commitdiff_plain;h=0070c7b0968cfcc243038dc8d23e8537fdb16433;p=eval.git

Added documentation.
---

diff --git a/README b/README
index f614d29..5f3126a 100644
--- a/README
+++ b/README
@@ -1,53 +1,19 @@
-How to set up the evaluator:
-----------------------------
+================================================================================
-
-Edit `config', especially MO_ROOT, EVAL_USER, EVAL_GROUP, TEST_USERS, CT_UID_MIN and CT_UID_MAX.
+
+                        The MO Contest Environment 1.0
-
-Create $MO_ROOT (here we assume it's /aux/mo)
+
+                         (c) 2001--2005 Martin Mares
-
-Create the evaluation users (outside $CT_UID_MIN .. $CT_UID_MAX):
+
+================================================================================
-
-	mo-eval:x:65000:65000:MO Evaluator:/aux/mo/eval/mo-eval:/bin/bash
-	mo-test1:x:65001:65000:MO Tester 1:/aux/mo/eval/mo-test1:/bin/bash
-	mo-test2:x:65002:65000:MO Tester 2:/aux/mo/eval/mo-test2:/bin/bash
-
-Create the evaluation group:
+
+This is the programming contest environment used by the Czech Programming Olympiad
+and some other competitions.
-
-	mo-eval:x:65000:
+
+You will find brief documentation in the doc/ directory; start with doc/index.html.
-
-Create the contestant users (inside $CT_UID_MIN .. $CT_UID_MAX; as many as you wish;
-the names don't matter):
+
+The whole package can be distributed and used under the terms of the GNU General
+Public License version 2.
-
-	mo00:x:65100:65100:MO Contestant 00:/aux/mo/users/mo00/mo00:/bin/bash
-	...
-	mo99:x:65199:65199:MO Contestant 99:/aux/mo/users/mo99/mo99:/bin/bash
-
-... and their groups (preferably with the same IDs):
-
-	mo00:x:65100:
-	...
-	mo99:x:65199:
-
-... and their passwords ...
-
-Run bin/mo-install to create the infrastructure.
-
-Run bin/mo-create-contestants to create contestants' home directories;
-files from template/* will be copied there automatically.
-
-If users are logging in remotely, set quotas and other limits for them.
-Don't use limits.conf for that, since with sshd the limits would affect _root_
-(probably a bug in the interface between sshd and PAM). Just add to /etc/profile:
-
-	if [ $UID -ge 65100 -a $UID -le 65199 ] ; then
-		ulimit -Sc 8192		# core file size
-		ulimit -SHu 32		# processes
-		ulimit -SHn 256		# max open files
-		ulimit -SHv 262144	# max virtual memory
-	fi
-
-
-Various notes:
-~~~~~~~~~~~~~~
-- if you want to assign a partial score to test cases, just make the OUTPUT_CHECK
-  program generate a file $TDIR/$TEST.pts containing the score.
+
+If you have any suggestions, bug reports or improvements you would like to share
+with others, please send them to mj@ucw.cz.

diff --git a/doc/anatomy.html b/doc/anatomy.html
new file mode 100644
index 0000000..50c4c38
--- /dev/null
+++ b/doc/anatomy.html
@@ -0,0 +1,44 @@

MO Eval - Anatomy

The Anatomy of MO-Eval


MO-Eval lives in the directory structure sketched below.
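The original listing did not survive the conversion of this page; the sketch below is
reconstructed only from directories mentioned elsewhere in this documentation and the
README, so treat it as an approximation, not an authoritative layout:

	bin/		the evaluation scripts (bin/ev, bin/mo-ev-all, bin/mo-install, ...)
	bin/lib		the library of shell functions the scripts build upon
	config		the top-level configuration file
	problems/	one subdirectory per task, holding its config file and test data
	template/	files copied to the contestants' home directories
	doc/		this documentation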


When installed, it will create the hierarchy sketched below.
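Again, the original listing was lost; the following sketch is assembled from the paths
used in the README and the other chapters, so the details are assumptions (/aux/mo is
just the README's example value of $MO_ROOT):

	$MO_ROOT/eval/mo-eval/			home of the evaluation user, including the secret test data
	$MO_ROOT/eval/mo-test*/			homes of the test users that run the solutions
	$MO_ROOT/users/*/			home directories of the contestants
	$MO_ROOT/solutions/<contestant>/<task>/	solutions submitted by the contestants
	$MO_ROOT/testing/<contestant>/<task>/	results of the evaluation
	$MO_ROOT/public/			utilities and public test cases for the contestants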


We have tried to make the whole system as flexible as possible, which has led to writing
almost everything as simple shell scripts, building on a library of shell functions contained
in bin/lib. The config files are also shell scripts, which makes it possible to do lots
of fancy (and, to be honest, also confusing :)) things there, the most useful being variable
substitution.

diff --git a/doc/eval.html b/doc/eval.html
new file mode 100644
index 0000000..42a5461
--- /dev/null
+++ b/doc/eval.html
@@ -0,0 +1,53 @@

MO Eval - Evaluation

Evaluating solutions


When the competition is over, copy all solutions submitted by the contestants
to solutions/contestant/task. If you use
our submitting system, you can call bin/mo-grab to do this.

Then you can call bin/ev contestant task to evaluate
a single solution. (In some cases, you can also add the name of the solution as the
third parameter, which could be useful for comparing different authors' solutions.)

You can also use bin/mo-ev-all task names to evaluate
solutions of the specified tasks by all contestants.

The open data problems are evaluated in a different way; you need to run
bin/ev-open or bin/mo-ev-open-all instead.
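A sketch of the whole workflow, putting the commands above together. The contestant
and task names are made up, the arguments of bin/mo-grab are assumed to be task names,
and the argument pattern of the open data variants is assumed to match the plain ones:

	bin/mo-grab sum			# collect the submitted solutions of task "sum" (arguments assumed)
	bin/ev mo42 sum			# evaluate one contestant's solution of "sum"
	bin/mo-ev-all sum		# evaluate "sum" for all contestants
	bin/ev-open mo42 sum		# the same for an open data task
	bin/mo-ev-open-all sum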

Results


For each solution evaluated, bin/ev creates the directory testing/contestant/task,
containing the files sketched below.
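The original list of files was lost. Judging from references elsewhere in this
documentation (the points files read by bin/mo-score, the $TDIR/$TEST.pts files
mentioned in the README), the directory contains roughly the following; the exact
names are assumptions:

	<test>.out	the output of the tested program on each test case
	<test>.pts	partial score assigned by the OUTPUT_CHECK program, if any
	points		the points awarded, later gathered by bin/mo-score
	log		a log of the whole evaluation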


Sandbox

FIXME

Score table


The bin/mo-score utility can be used to generate an HTML score table
from all the points files. The quality of the output is not perfect,
but it can serve as a basis for further formatting.

diff --git a/doc/index.html b/doc/index.html
new file mode 100644
index 0000000..61a501b
--- /dev/null
+++ b/doc/index.html
@@ -0,0 +1,58 @@

The MO Contest Environment

The MO Contest Environment


The MO Contest Environment, a.k.a. MO-Eval, is a simple system for conducting programming competitions similar to the
International Olympiad in Informatics – a contest
where the participants solve programming tasks, which are then evaluated off-line after the end of the
competition. It's built in a modular way, so extending it to other types of programming contests
(e.g., to on-line contests like the ACM ICPC) should be
pretty easy, but it hasn't been done yet.

We have been using this environment at the Czech Olympiad in Programming
(officially a part of the Mathematical Olympiad) since 2002, and also at the CPSPC
(Czech-Polish-Slovak Preparation Camp) whenever it's held in the Czech Republic.

Download


You can download the current release eval-1.0
or browse the archive of past releases.

Documentation

- The Anatomy of MO-Eval (doc/anatomy.html)
- The Installation of MO-Eval (doc/install.html)
- Evaluating solutions (doc/eval.html)
- Tasks and their types (doc/tasks.html)
- Utilities for contestants (doc/public.html)

Portability


The environment runs under Linux. We currently use a slightly modified installation of Debian
GNU/Linux, but it will happily work with any other Linux distribution with a 2.4 or newer kernel. Everything except the sandbox
module (which heavily depends on the Linux kernel) should be easily portable to other UNIX systems, although you will probably
need to install some of the GNU utilities (especially bash) and Perl. Porting to Windows is out of the question.

Author


MO-Eval has been written by Martin Mares.
Great thanks go to Jan Kara and Milan Straka for their help and for many fine ideas.

License


The MO-Eval package can be used and distributed under the terms of the GNU General
Public License version 2.

Feedback


All bug reports, suggestions and patches are welcome. Please mail them to mj@ucw.cz.

diff --git a/doc/install.html b/doc/install.html
new file mode 100644
index 0000000..ca0357b
--- /dev/null
+++ b/doc/install.html
@@ -0,0 +1,107 @@

MO Eval - Installation

The Installation of MO-Eval


MO-Eval can be installed in two possible ways:

- a minimal installation, suitable only for playing with the package
- a normal installation, recommended for real competitions

Both are described below.

Minimal Installation


Just invoke make and edit the top-level config file to suit
your needs, leaving TEST_USER commented out.
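A hypothetical excerpt of the top-level config file; the variable names come from the
old README, the values are only illustrations:

	MO_ROOT=/aux/mo			# root of the whole hierarchy (the README's example)
	EVAL_USER=mo-eval		# user running the evaluator
	EVAL_GROUP=mo-eval		# group of the evaluation users
	TEST_USERS="mo-test1 mo-test2"	# users running the tested programs
	#TEST_USER=			# keep commented out for the minimal installation
	CT_UID_MIN=65100		# UID range reserved for the contestants
	CT_UID_MAX=65199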

In this setup, everything lives in the source tree you have started with
and you don't need any special privileges to install or run the evaluator.

Beware: The evaluated programs are NOT fully separated
from the evaluation environment and they could interfere with it. Use this setup only for
playing with the package, not for any real competition.

Normal Installation


The recommended way is to let the evaluator use two user accounts. One of them (let's
call it mo-eval) runs the evaluator and keeps all secret
files like the test data; the other one (mo-test) runs the tested
programs. There can be multiple test users if you want to run several evaluators
in parallel. However, in practice the evaluation is so fast that this is seldom
needed.

How to set up this installation (the original list was lost; the steps below are
reconstructed from the old README, which this commit's documentation replaces):

- Edit the top-level config file, especially MO_ROOT, EVAL_USER, EVAL_GROUP,
  TEST_USERS, CT_UID_MIN and CT_UID_MAX.
- Create $MO_ROOT (the README assumes /aux/mo).
- Create the evaluation users (with UIDs outside $CT_UID_MIN .. $CT_UID_MAX) and
  the evaluation group, as in the example below.
- Run bin/mo-install to create the infrastructure.
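The example accounts from the old README (two test users allow two evaluators to run
in parallel):

	mo-eval:x:65000:65000:MO Evaluator:/aux/mo/eval/mo-eval:/bin/bash
	mo-test1:x:65001:65000:MO Tester 1:/aux/mo/eval/mo-test1:/bin/bash
	mo-test2:x:65002:65000:MO Tester 2:/aux/mo/eval/mo-test2:/bin/bash

and the corresponding group:

	mo-eval:x:65000: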


Contestants' homes


MO-Eval can either take care of the home directories of contestants or use
an existing infrastructure. In the former case, you need to do the following (the
original list was lost; these steps come from the old README):

- Create the contestant users (with UIDs inside $CT_UID_MIN .. $CT_UID_MAX; as many
  as you wish, the names don't matter), their groups (preferably with the same IDs)
  and their passwords.
- Run bin/mo-create-contestants to create the home directories; files from
  template/* will be copied there automatically.
- If users are logging in remotely, set quotas and other limits for them, as in
  the snippet below.
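The old README recommends putting the limits in /etc/profile rather than limits.conf
(with sshd, limits.conf would affect root, probably a bug in the interface between
sshd and PAM); the UID range matches the example contestant accounts:

	if [ $UID -ge 65100 -a $UID -le 65199 ] ; then
		ulimit -Sc 8192		# core file size
		ulimit -SHu 32		# number of processes
		ulimit -SHn 256		# max open files
		ulimit -SHv 262144	# max virtual memory
	fi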

diff --git a/doc/mo-eval.css b/doc/mo-eval.css
new file mode 100644
index 0000000..ba129ac
--- /dev/null
+++ b/doc/mo-eval.css
@@ -0,0 +1,32 @@
BODY {
	background-color: #e2d4a5;
	color: #202000;
}
:link {
	color: #0030a0;
	text-decoration: none;
}
:visited {
	color: #631782;
	text-decoration: none;
}
A[href]:hover {
	background-color: #e2c461;
}

H1 {
	display: block;
	font-size: 207.36%;
	font-weight: bold;
	margin: 0.67em 0 0.67em;
	text-align: center;
}

H2 {
	display: block;
	font-size: 144%;
	font-weight: bold;
	margin: 1em 0 0.67em;
	text-align: left;
}

diff --git a/doc/public.html b/doc/public.html
new file mode 100644
index 0000000..6ac5efd
--- /dev/null
+++ b/doc/public.html
@@ -0,0 +1,36 @@

MO Eval - Utilities for contestants

Utilities for contestants


MO-Eval also offers several utilities for use by the contestants. They are by default
installed to the public directory, where you should also install a subset
of the problems hierarchy containing the config files and public test
cases (e.g., the example input/output from the problem description sheet).


How it works


Compilation and checking use the same evaluation mechanism as is used later for the
real evaluation. However, it runs with the contestant's privileges and is limited
to the publicly available test cases only. The evaluation log is written to a file called
log in the current directory; all other files are put into a .box
subdirectory in the contestant's home.

Submit stores all submitted tasks in a .submit directory in the
contestant's home. It's probably wise to warn the contestants that they shouldn't
delete this directory :-)

diff --git a/doc/tasks.html b/doc/tasks.html
new file mode 100644
index 0000000..afddd7c
--- /dev/null
+++ b/doc/tasks.html
@@ -0,0 +1,43 @@

MO Eval - Tasks

Tasks and their types


MO-Eval supports the following standard types of tasks (new task types can be defined, but
it takes some effort). The original list was lost; the rest of this documentation mentions
at least the classical tasks evaluated off-line and the open data tasks, whose evaluation
is described in the Evaluating solutions chapter.


Setting up a task


To define a task, you should create a directory problems/task_name and populate it
with the files sketched below.

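The original file list did not survive. As a rough sketch based on what this
documentation does mention (a per-task config file, plus test inputs and outputs),
a hypothetical task might look like this; the file names are illustrative
assumptions, not the definitive convention:

	problems/sum/config	task configuration (a shell script, see the Anatomy chapter)
	problems/sum/1.in	input of the first test case
	problems/sum/1.out	expected output of the first test case
	...			further test cases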