NAME
    Parallel::Loops - Execute loops using parallel forked subprocesses

SYNOPSIS
        use Parallel::Loops;

        my $maxProcs = 5;
        my $pl = Parallel::Loops->new($maxProcs);

        my @parameters = ( 0 .. 9 );

        # We want to perform some hefty calculation for each element of
        # @parameters and store each calculation's result in %returnValues.
        # For that reason, we "tie" %returnValues, so that changes to
        # %returnValues in any child process (see below) are automatically
        # transferred and updated in the parent also.

        my %returnValues;
        $pl->share( \%returnValues );

        $pl->foreach( \@parameters, sub {
            # This sub is "magically" executed in parallel forked child
            # processes

            # Let's just create a simple example, but this could be a
            # massive calculation that will be parallelized, so that
            # $maxProcs different processes are calculating sqrt
            # simultaneously for different values of $_ on different CPUs
            # (Do see 'Performance' / 'Properties of the loop body' below)

            $returnValues{$_} = sqrt($_);
        });
        foreach (@parameters) {
            printf "i: %d sqrt(i): %f\n", $_, $returnValues{$_};
        }

    You can also use @arrays instead of %hashes, and/or while loops instead
    of foreach:

        my @returnValues;
        $pl->share(\@returnValues);

        my $i = 0;
        $pl->while ( sub { $i++ < 10 }, sub {
            # This sub is "magically" executed in parallel forked
            # child processes

            push @returnValues, [ $i, sqrt($i) ];
        });

    And both foreach and while can return values, so that $pl->share()
    isn't required at all:

        my $maxProcs = 5;
        my $pl = Parallel::Loops->new($maxProcs);
        my %returnValues = $pl->foreach( [ 0..9 ], sub {
            # Again, this is executed in a forked child
            $_ => sqrt($_);
        });

DESCRIPTION
    Often a loop performs calculations where each iteration of the loop does
    not depend on the previous iteration, and the iterations really could be
    carried out in any order.

    This module allows you to run such loops in parallel using all the CPUs
    at your disposal.

    Return values are automatically transferred from children to the parent
    via %hashes or @arrays that have explicitly been configured for that
    sort of sharing via $pl->share(). Hashes will transfer keys that are set
    in children (but not cleared or unset), and elements that are pushed to
    @arrays in children are pushed to the parent @array too (but note that
    the order is not guaranteed to be the same as it would have been if done
    all in one process, since there is no way of knowing which child will
    finish first!)

    If you can see past the slightly awkward syntax, you're basically
    getting foreach and while loops that can run in parallel without having
    to bother with fork, pipes, signals etc. This is all handled for you by
    this module.

  foreach loop
        $pl->foreach($arrayRef, $childBodySub)

    Runs $childBodySub->() with $_ set to each element of @$arrayRef, except
    that $childBodySub is run in a forked child process to obtain
    parallelism. Essentially, this does something conceptually similar to:

        foreach(@$arrayRef) {
            $childBodySub->();
        }

    Any setting of hash keys or pushing to arrays that have been shared with
    $pl->share() will automagically appear in the hash or array in the
    parent process.

    If you like loop variables, you can run it like so:

        $pl->foreach( \@input, sub {
                my $i = $_;
                .. bla, bla, bla ... $output{$i} = sqrt($i);
            }
        );

  while loop
      $pl->while($conditionSub, $childBodySub)

    Essentially, this does something conceptually similar to:

      while($conditionSub->()) {
          $childBodySub->();
      }

    except that $childBodySub->() is executed in a forked child process.
    Return values are transferred via share() like in "foreach loop" above.

   while loops must affect condition outside $childBodySub
    Note that incrementing $i in the $childBodySub like in this example will
    not work:

       $pl->while( sub { $i < 5 },
                   sub {
                       $output{$i} = sqrt($i);
                       # Won't work!
                       $i++
                   }
                 );

    Because $childBodySub is executed in a child process, $i would be
    incremented only in the child, and that change would not make it back to
    the parent, where $conditionSub is evaluated. The changes that
    eventually make $conditionSub return false *must* take place outside
    $childBodySub so that they are executed in the parent. (This adheres to
    the parallel principle that one iteration may not affect any other
    iteration - including whether or not to run it.)
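
    A working version keeps the changes to $i in the condition sub, which
    runs in the parent. This is a sketch that assumes %output has been
    shared via $pl->share(\%output):

        my $i = 0;
        $pl->while( sub { $i++ < 5 },
                    sub {
                        # $i was incremented in the parent's condition sub,
                        # so the child sees the values 1 .. 5 here
                        $output{$i} = sqrt($i);
                    }
                  );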

  share
      $pl->share(\%output, \@output, ...)

    Each of the arguments to share() is instrumented, so that when a hash
    key is set or an array element is pushed in a child, the change is
    transferred to the parent's hash or array automatically when the child
    is finished.

    Note the limitation: only keys being set like "$hash{'key'} = 'value'"
    and array elements being pushed like "push @array, 'value'" will be
    transferred to the parent. Unsetting keys, or setting particular array
    elements with $array[3]='value', will be lost if done in the children.
    Also, if two different children set a value for the same key, a random
    one of them will be seen by the parent.

    In the parent process all the %hashes and @arrays are full-fledged, and
    you can use all operations. But only the operations mentioned above,
    performed in the child processes, make it back to the parent.
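
    For example, in this sketch both the hash keys set and the array pushes
    performed in the children appear in the parent afterwards; anything
    else (deleting keys, assigning to individual array elements) would be
    lost:

        my (%squares, @squares);
        $pl->share( \%squares, \@squares );
        $pl->foreach( [ 1 .. 3 ], sub {
            $squares{$_} = $_ * $_;    # key set in a child: transferred
            push @squares, $_ * $_;    # push in a child: transferred
        });
        # In the parent, %squares now has the keys 1 .. 3, and @squares
        # holds the three pushed values (in whatever order the children
        # finished)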

   Array element sequence not defined
    Note that when using share() for @returnValue arrays, the sequence of
    elements in @returnValue is not guaranteed to be the same as you'd see
    with a normal sequential while or foreach loop, since the calculations
    are done in parallel and the children may finish in an unpredictable
    order. But if you don't really care about the order of elements in the
    @returnValue array, then sharing an array is useful and fine.

    If you need to be able to determine which iteration generated what
    output, use a hash instead.
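
    If you do want an array, another option (a sketch, in the spirit of the
    SYNOPSIS) is to push [ $_, result ] pairs and sort them in the parent:

        my @pairs;
        $pl->share( \@pairs );
        $pl->foreach( [ 0 .. 9 ], sub {
            push @pairs, [ $_, sqrt($_) ];
        });
        # Children finish in any order, so sort on the recorded index
        my @ordered = sort { $a->[0] <=> $b->[0] } @pairs;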

  Recursive forking is possible
    Note that no check is performed for recursive forking: If the main
    process encounters a loop that it executes in parallel, and the
    execution of the loop in child processes also encounters a parallel
    loop, these will also be forked, and you'll essentially have
    $maxProcs^2 running processes. It wouldn't be too hard to implement
    such a check (either inside or outside this package).
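
    Outside this package, such a check could look roughly like the sketch
    below, where maybe_parallel_foreach is a hypothetical helper and not
    part of the Parallel::Loops API:

        our $insideChild = 0;

        sub maybe_parallel_foreach {
            my ($pl, $arrayRef, $childBodySub) = @_;
            if ($insideChild) {
                # Already inside a forked child: run serially instead of
                # forking yet another set of $maxProcs processes
                foreach (@$arrayRef) {
                    $childBodySub->();
                }
                return;
            }
            $pl->foreach( $arrayRef, sub {
                local $insideChild = 1;
                $childBodySub->();
            });
        }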

Exception/Error Handling / Dying
    If you want some measure of exception handling you can use eval in the
    child like this:

        my %errors;
        $pl->share( \%errors );
        my %returnValues = $pl->foreach( [ 0..9 ], sub {
            # Again, this is executed in a forked child
            my @result = eval {
                die "Bogus error"
                    if $_ == 3;
                ( $_ => sqrt($_) );
            };
            if ($@) {
                $errors{$_} = $@;
                return;
            }
            # Return the ( key => value ) pair so it ends up in
            # %returnValues in the parent
            @result;
        });

        # Now test %errors. $errors{3} should exist as the only element

    Also, be sure not to call exit() in the child. That will just exit the
    child, and that doesn't work. Right now, exit just makes the parent fail
    not-so-nicely. Patches that handle exit somehow are welcome.
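
    If you need to abandon an iteration early, simply return from the sub
    instead (a sketch, assuming @input and a shared %results as in the
    earlier examples):

        $pl->foreach( \@input, sub {
            # Skip this iteration without calling exit()
            return if $_ == 0;
            $results{$_} = 1 / $_;
        });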

Performance
  Properties of the loop body
    Keep in mind that a child process is forked every time while or foreach
    calls the provided sub. For Parallel::Loops to make sense, each
    invocation needs to do enough work that the gain from parallel
    execution outweighs the overhead of forking and of communicating
    between the processes. So because sqrt() in the example above is so
    cheap, it will actually be slower there than just running it in a
    standard foreach loop.

    Also, if each loop sub returns a massive amount of data, this needs to
    be communicated back to the parent process, and again that could
    outweigh parallel performance gains unless the loop body does some heavy
    work too.
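
    One way to keep that ratio sensible is to hand each child a chunk of
    elements rather than a single one, so that each fork does more work.
    This is a sketch (the chunk size of 500 is arbitrary):

        my %results;
        $pl->share( \%results );

        my @input = ( 0 .. 9999 );
        my @chunks;
        while ( my @slice = splice @input, 0, 500 ) {
            push @chunks, [@slice];
        }

        $pl->foreach( \@chunks, sub {
            # $_ is one chunk (an array ref); each child handles 500
            # elements, so the fork and IPC overhead is paid only once
            # per chunk
            for my $x (@$_) {
                $results{$x} = sqrt($x);    # stand-in for heavier work
            }
        });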

  Linux and Windows Comparison
    On the same VMware host, I ran this script in Debian Linux and Windows
    XP virtual machines respectively. The script runs a "no-op" sub in 1000
    child processes, two at a time in parallel:

        my $pl = Parallel::Loops->new(2);
        $pl->foreach( [1..1000], sub {} );

    For comparison, that took:

       7.3 seconds on Linux
      43   seconds on Strawberry Perl for Windows
     240   seconds on Cygwin for Windows

  fork() e.g. on Windows
    On some platforms fork() is emulated. Be sure to read perlfork.

  Temporary files unless select() works - e.g. on Windows
    E.g. on Windows, select is only supported for sockets, and not for
    pipes. So we use temporary files to store the information sent from the
    child to the parent. This adds a little extra overhead. See perlport for
    other platforms where there are problems with select. Parallel::Loops
    tests for a working select() and uses temporary files otherwise.

SEE ALSO
    This module uses fork(). Using ithreads would have been possible too,
    but that approach was not chosen. You may want to check out:

    When to use forks, when to use threads ...?
    <http://www.perlmonks.org/index.pl?node_id=709061>

    The forks module (not used here)
    <http://search.cpan.org/search?query=forks>

    threads in perlthrtut <http://perldoc.perl.org/perlthrtut.html>

DEPENDENCIES
    I believe this is the only dependency that isn't part of core perl:

        use Parallel::ForkManager;

    These should all be in perl's core:

        use Storable;
        use IO::Handle;
        use Tie::Array;
        use Tie::Hash;

BUGS / ENHANCEMENTS
    No bugs are known at the moment. Send any reports to peter@morch.com.

    Enhancements:

    Optionally prevent recursive forking: If a forked child encounters a
    Parallel::Loops loop, it should be possible to prevent that
    Parallel::Loops instance from also creating forks.

    Determine the number of CPUs so that new()'s $maxProcs parameter can be
    optional. Could use e.g. Sys::Sysconf, UNIX::Processors or Sys::CPU.
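
    Until then, a Linux-only way to guess a sensible $maxProcs is to count
    the processors listed in /proc/cpuinfo (a sketch, not part of this
    module):

        my $maxProcs = do {
            open my $fh, '<', '/proc/cpuinfo'
                or die "Cannot read /proc/cpuinfo: $!";
            scalar grep { /^processor\s*:/ } <$fh>;
        } || 1;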

    Maybe use function prototypes (see Prototypes under perldoc perlsub).

    Then we could do something like

        pl_foreach @input {
            yada($_);
        };
    or

        pl_foreach $pl @input {
            yada($_);
        };

    instead of

        $pl->foreach(\@input, sub {
            yada($_);
        });

    and so on, where the first suggestion above implies global variables
    (yikes!). Unfortunately, methods aren't supported by prototypes, so this
    will never be possible:

        $pl->foreach @input {
            yada($_);
        };

    An alternative pointed out by the perlmonks chatterbox could be to use
    Devel::Declare "if I can stand pain".

SOURCE REPOSITORY
    See the git source on github
    <https://github.com/pmorch/perl-Parallel-Loops>

COPYRIGHT
    Copyright (c) 2008 Peter Valdemar Mørch <peter@morch.com>

    All rights reserved. This program is free software; you can redistribute
    it and/or modify it under the same terms as Perl itself.

AUTHOR
      Peter Valdemar Mørch <peter@morch.com>