Very large data sets often have a flat but regular structure and span multiple disks and machines. Examples include telephone call records, network logs, and web document repositories. These large data sets are not amenable to study using traditional database techniques, if only because they can be too large to fit in a single relational database. On the other hand, many of the analyses done on them can be expressed using simple, easily distributed computations: filtering, aggregation, extraction of statistics, and so on.
We present a system for automating such analyses. A filtering phase, in which a query is expressed using a new procedural programming language, emits data to an aggregation phase. Both phases are distributed over hundreds or even thousands of computers. The results are then collated and saved to a file. The design—including the separation into two phases, the form of the programming language, and the properties of the aggregators—exploits the parallelism inherent in having data and computation distributed across many machines.
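The two-phase shape can be sketched as follows. This is a minimal Python illustration of the pattern, not the system itself: the real filtering phase is written in the system's own language, and the record fields and threshold here are invented for the example.

```python
from collections import defaultdict

# Hypothetical log records; in the real system these would be
# spread across many disks and machines.
records = [
    {"host": "a", "bytes": 120},
    {"host": "b", "bytes": 300},
    {"host": "a", "bytes": 80},
]

def filter_phase(record):
    """Filtering phase: examines one record at a time, independently
    of all others, and emits (key, value) pairs to the aggregators."""
    if record["bytes"] > 100:  # an illustrative filter condition
        yield (record["host"], record["bytes"])

def aggregate_phase(emissions):
    """Aggregation phase: sums the values emitted for each key.
    Because summation is commutative and associative, partial sums
    can be computed on many machines and collated at the end."""
    totals = defaultdict(int)
    for key, value in emissions:
        totals[key] += value
    return dict(totals)

emitted = [pair for r in records for pair in filter_phase(r)]
print(aggregate_phase(emitted))  # {'a': 120, 'b': 300}
```

The key property exploited for parallelism is visible even in this toy: the filter touches each record in isolation, and the aggregator's operation is order-independent, so both sides can be fanned out across machines without coordination until the final collation.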