   1  =head1 NAME
   2  
   3  perlhack - How to hack at the Perl internals
   4  
   5  =head1 DESCRIPTION
   6  
   7  This document attempts to explain how Perl development takes place,
   8  and ends with some suggestions for people wanting to become bona fide
   9  porters.
  10  
  11  The perl5-porters mailing list is where the Perl standard distribution
  12  is maintained and developed.  The list can get anywhere from 10 to 150
  13  messages a day, depending on the heatedness of the debate.  Most days
  14  there are two or three patches, extensions, features, or bugs being
  15  discussed at a time.
  16  
  17  A searchable archive of the list is at either:
  18  
  19      http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/
  20  
  21  or
  22  
  23      http://archive.develooper.com/perl5-porters@perl.org/
  24  
  25  List subscribers (the porters themselves) come in several flavours.
  26  Some are quiet curious lurkers, who rarely pitch in and instead watch
  27  the ongoing development to ensure they're forewarned of new changes or
  28  features in Perl.  Some are representatives of vendors, who are there
  29  to make sure that Perl continues to compile and work on their
  30  platforms.  Some patch any reported bug that they know how to fix,
  31  some are actively patching their pet area (threads, Win32, the regexp
  32  engine), while others seem to do nothing but complain.  In other
  33  words, it's your usual mix of technical people.
  34  
  35  Over this group of porters presides Larry Wall.  He has the final word
  36  in what does and does not change in the Perl language.  Various
  37  releases of Perl are shepherded by a "pumpking", a porter
  38  responsible for gathering patches, deciding on a patch-by-patch,
  39  feature-by-feature basis what will and will not go into the release.
  40  For instance, Gurusamy Sarathy was the pumpking for the 5.6 release of
  41  Perl, and Jarkko Hietaniemi was the pumpking for the 5.8 release, and
  42  Rafael Garcia-Suarez holds the pumpking crown for the 5.10 release.
  43  
  44  In addition, various people are pumpkings for different things.  For
  45  instance, Andy Dougherty and Jarkko Hietaniemi did a grand job as the
  46  I<Configure> pumpkin up till the 5.8 release. For the 5.10 release
  47  H.Merijn Brand took over.
  48  
  49  Larry sees Perl development along the lines of the US government:
  50  there's the Legislature (the porters), the Executive branch (the
  51  pumpkings), and the Supreme Court (Larry).  The legislature can
  52  discuss and submit patches to the executive branch all they like, but
  53  the executive branch is free to veto them.  Rarely, the Supreme Court
  54  will side with the executive branch over the legislature, or the
  55  legislature over the executive branch.  Mostly, however, the
  56  legislature and the executive branch are supposed to get along and
  57  work out their differences without impeachment or court cases.
  58  
  59  You might sometimes see reference to Rule 1 and Rule 2.  Larry's power
  60  as Supreme Court is expressed in The Rules:
  61  
  62  =over 4
  63  
  64  =item 1
  65  
  66  Larry is always by definition right about how Perl should behave.
  67  This means he has final veto power on the core functionality.
  68  
  69  =item 2
  70  
  71  Larry is allowed to change his mind about any matter at a later date,
  72  regardless of whether he previously invoked Rule 1.
  73  
  74  =back
  75  
  76  Got that?  Larry is always right, even when he was wrong.  It's rare
  77  to see either Rule exercised, but they are often alluded to.
  78  
  79  New features and extensions to the language are contentious, because
  80  the criteria used by the pumpkings, Larry, and other porters to decide
  81  which features should be implemented and incorporated are not codified
  82  in a few small design goals as with some other languages.  Instead,
  83  the heuristics are flexible and often difficult to fathom.  Here is
  84  one person's list, roughly in decreasing order of importance, of
  85  heuristics that new features have to be weighed against:
  86  
  87  =over 4
  88  
  89  =item Does the concept match the general goals of Perl?
  90  
  91  These haven't been written anywhere in stone, but one approximation
  92  is:
  93  
  94   1. Keep it fast, simple, and useful.
  95   2. Keep features/concepts as orthogonal as possible.
  96   3. No arbitrary limits (platforms, data sizes, cultures).
  97   4. Keep it open and exciting to use/patch/advocate Perl everywhere.
  98   5. Either assimilate new technologies, or build bridges to them.
  99  
 100  =item Where is the implementation?
 101  
 102  All the talk in the world is useless without an implementation.  In
 103  almost every case, the person or people who argue for a new feature
 104  will be expected to be the ones who implement it.  Porters capable
 105  of coding new features have their own agendas, and are not available
 106  to implement your (possibly good) idea.
 107  
 108  =item Backwards compatibility
 109  
 110  It's a cardinal sin to break existing Perl programs.  New warnings are
 111  contentious--some say that a program that emits warnings is not
 112  broken, while others say it is.  Adding keywords has the potential to
 113  break programs; changing the meaning of existing token sequences or
 114  functions might break programs.
 115  
 116  =item Could it be a module instead?
 117  
 118  Perl 5 has extension mechanisms, modules and XS, specifically to avoid
 119  the need to keep changing the Perl interpreter.  You can write modules
 120  that export functions, you can give those functions prototypes so they
 121  can be called like built-in functions, you can even write XS code to
 122  mess with the runtime data structures of the Perl interpreter if you
 123  want to implement really complicated things.  If it can be done in a
 124  module instead of in the core, it's highly unlikely to be added.
 125  
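As a purely illustrative sketch of that module route (the package and
function names here are invented, not a real distribution), a pure-Perl
module can export a prototyped function which is then called much like a
built-in:

    # Hypothetical module -- not part of the core distribution.
    package Acme::Clamp;
    use strict;
    use warnings;
    require Exporter;
    our @ISA       = qw(Exporter);
    our @EXPORT_OK = qw(clamp);

    sub clamp ($$) {                # ($$): exactly two scalar arguments
        my ($value, $limit) = @_;
        return $value > $limit ? $limit : $value;
    }

    1;

    # In a program, after "use Acme::Clamp qw(clamp);":
    #     my $n = clamp 15, 10;     # no parentheses needed; $n is 10
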
 126  =item Is the feature generic enough?
 127  
 128  Is this something that only the submitter wants added to the language,
 129  or would it be broadly useful?  Sometimes, instead of adding a feature
 130  with a tight focus, the porters might decide to wait until someone
 131  implements the more generalized feature.  For instance, instead of
 132  implementing a "delayed evaluation" feature, the porters are waiting
 133  for a macro system that would permit delayed evaluation and much more.
 134  
 135  =item Does it potentially introduce new bugs?
 136  
 137  Radical rewrites of large chunks of the Perl interpreter have the
 138  potential to introduce new bugs.  The smaller and more localized the
 139  change, the better.
 140  
 141  =item Does it preclude other desirable features?
 142  
 143  A patch is likely to be rejected if it closes off future avenues of
 144  development.  For instance, a patch that placed a true and final
 145  interpretation on prototypes is likely to be rejected because there
 146  are still options for the future of prototypes that haven't been
 147  addressed.
 148  
 149  =item Is the implementation robust?
 150  
 151  Good patches (tight code, complete, correct) stand more chance of
 152  going in.  Sloppy or incorrect patches might be placed on the back
 153  burner until the pumpking has time to fix them, or might be discarded
 154  altogether without further notice.
 155  
 156  =item Is the implementation generic enough to be portable?
 157  
 158  The worst patches make use of system-specific features.  It's highly
 159  unlikely that non-portable additions to the Perl language will be
 160  accepted.
 161  
 162  =item Is the implementation tested?
 163  
 164  Patches which change behaviour (fixing bugs or introducing new features)
 165  must include regression tests to verify that everything works as expected.
 166  Without tests provided by the original author, how can anyone else changing
 167  perl in the future be sure that they haven't unwittingly broken the behaviour
 168  the patch implements? And without tests, how can the patch's author be
 169  confident that the hard work they put into the patch won't be accidentally
 170  thrown away by someone in the future?
 171  
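As a rough, hypothetical sketch of what is meant (the file name and the
behaviour being pinned down are invented for illustration), a module test
written with the core Test::More module looks something like this:

    # hypothetical test file, e.g. lib/Some/Module/t/newfeature.t
    use strict;
    use warnings;
    use Test::More tests => 2;

    # Each assertion records the behaviour the patch introduces or fixes,
    # so anyone changing perl later finds out immediately if it breaks.
    is(lc("PERL"), "perl", 'lc() lowercases ASCII letters');
    like("bleadperl", qr/^blead/, 'string starts with "blead"');
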
 172  =item Is there enough documentation?
 173  
 174  Patches without documentation are probably ill-thought out or
 175  incomplete.  Nothing can be added without documentation, so submitting
 176  a patch for the appropriate manpages as well as the source code is
 177  always a good idea.
 178  
 179  =item Is there another way to do it?
 180  
 181  Larry said "Although the Perl Slogan is I<There's More Than One Way
 182  to Do It>, I hesitate to make 10 ways to do something".  This is a
 183  tricky heuristic to navigate, though--one man's essential addition is
 184  another man's pointless cruft.
 185  
 186  =item Does it create too much work?
 187  
 188  Work for the pumpking, work for Perl programmers, work for module
 189  authors, ...  Perl is supposed to be easy.
 190  
 191  =item Patches speak louder than words
 192  
 193  Working code is always preferred to pie-in-the-sky ideas.  A patch to
 194  add a feature stands a much higher chance of making it to the language
 195  than does a random feature request, no matter how fervently argued the
 196  request might be.  This ties into "Will it be useful?", as the fact
 197  that someone took the time to make the patch demonstrates a strong
 198  desire for the feature.
 199  
 200  =back
 201  
 202  If you're on the list, you might hear the word "core" bandied
 203  around.  It refers to the standard distribution.  "Hacking on the
 204  core" means you're changing the C source code to the Perl
 205  interpreter.  "A core module" is one that ships with Perl.
 206  
 207  =head2 Keeping in sync
 208  
 209  The source code to the Perl interpreter, in its different versions, is
 210  kept in a repository managed by a revision control system (which is
 211  currently the Perforce program; see http://perforce.com/ ).  The
 212  pumpkings and a few others have access to the repository to check in
 213  changes.  Periodically the pumpking for the development version of Perl
 214  will release a new version, so the rest of the porters can see what's
 215  changed.  The current state of the main trunk of the repository, and patches
 216  that describe the individual changes that have happened since the last
 217  public release are available at this location:
 218  
 219      http://public.activestate.com/pub/apc/
 220      ftp://public.activestate.com/pub/apc/
 221  
 222  If you're looking for a particular change, or a change that affected
 223  a particular set of files, you may find the B<Perl Repository Browser>
 224  useful:
 225  
 226      http://public.activestate.com/cgi-bin/perlbrowse
 227  
 228  You may also want to subscribe to the perl5-changes mailing list to
 229  receive a copy of each patch that gets submitted to the maintenance
 230  and development "branches" of the perl repository.  See
 231  http://lists.perl.org/ for subscription information.
 232  
 233  If you are a member of the perl5-porters mailing list, it is a good
 234  thing to keep in touch with the most recent changes, if only to
 235  verify that what you would have posted as a bug report isn't already
 236  solved in the most recent available perl development branch, also
 237  known as perl-current, bleading edge perl, bleedperl or bleadperl.
 238  
 239  Needless to say, the source code in perl-current is usually in a perpetual
 240  state of evolution.  You should expect it to be very buggy.  Do B<not> use
 241  it for any purpose other than testing and development.
 242  
 243  Keeping in sync with the most recent branch can be done in several ways,
 244  but the most convenient and reliable way is using B<rsync>, available at
 245  ftp://rsync.samba.org/pub/rsync/ .  (You can also get the most recent
 246  branch by FTP.)
 247  
 248  If you choose to keep in sync using rsync, there are two approaches
 249  to doing so:
 250  
 251  =over 4
 252  
 253  =item rsync'ing the source tree
 254  
 255  Presuming you are in the directory where your perl source resides
 256  and you have rsync installed and available, you can "upgrade" to
 257  the bleadperl using:
 258  
 259   # rsync -avz rsync://public.activestate.com/perl-current/ .
 260  
 261  This takes care of updating every single item in the source tree to
 262  the latest applied patch level, creating files that are new (to your
 263  distribution) and setting date/time stamps of existing files to
 264  reflect the bleadperl status.
 265  
 266  Note that this will not delete any files that were in '.' before
 267  the rsync. Once you are sure that the rsync is running correctly,
 268  run it with the --delete and the --dry-run options like this:
 269  
 270   # rsync -avz --delete --dry-run rsync://public.activestate.com/perl-current/ .
 271  
 272  This will I<simulate> an rsync run that also deletes files not
 273  present in the bleadperl master copy. Observe the results from
 274  this run closely. If you are sure that the actual run would delete
 275  no files precious to you, you could remove the '--dry-run' option.
 276  
 277  You can then check which patch was the latest one applied by
 278  looking in the file B<.patch>, which will show the number of the
 279  latest patch.
 280  
 281  If you have more than one machine to keep in sync, and not all of
 282  them have access to the WAN (so you are not able to rsync all the
 283  source trees to the real source), there are some ways to get around
 284  this problem.
 285  
 286  =over 4
 287  
 288  =item Using rsync over the LAN
 289  
 290  Set up a local rsync server which makes the rsynced source tree
 291  available to the LAN and sync the other machines against this
 292  directory.
 293  
 294  From http://rsync.samba.org/README.html :
 295  
 296     "Rsync uses rsh or ssh for communication. It does not need to be
 297      setuid and requires no special privileges for installation.  It
 298      does not require an inetd entry or a daemon.  You must, however,
 299      have a working rsh or ssh system.  Using ssh is recommended for
 300      its security features."
 301  
 302  =item Using pushing over the NFS
 303  
 304  Having the other systems mounted over the NFS, you can take an
 305  active pushing approach by checking the just-updated tree against
 306  the other, not-yet-synced trees. An example would be
 307  
 308    #!/usr/bin/perl -w
 309  
 310    use strict;
 311    use File::Copy;
 312  
 313    my %MF = map {
 314        m/(\S+)/;
 315        $1 => [ (stat $1)[2, 7, 9] ];    # mode, size, mtime
 316    } `cat MANIFEST`;
 317  
 318    my %remote = map { $_ => "/$_/pro/3gl/CPAN/perl-5.7.1" } qw(host1 host2);
 319  
 320    foreach my $host (keys %remote) {
 321        unless (-d $remote{$host}) {
 322            print STDERR "Cannot Xsync for host $host\n";
 323            next;
 324        }
 325        foreach my $file (keys %MF) {
 326            my $rfile = "$remote{$host}/$file";
 327            my ($mode, $size, $mtime) = (stat $rfile)[2, 7, 9];
 328            defined $size or ($mode, $size, $mtime) = (0, 0, 0);  # remote copy missing
 329            $size == $MF{$file}[1] && $mtime == $MF{$file}[2] and next;  # already in sync
 330            printf "%4s %-34s %8d %9d  %8d %9d\n",
 331                $host, $file, $MF{$file}[1], $MF{$file}[2], $size, $mtime;
 332            unlink $rfile;
 333            copy ($file, $rfile);
 334            utime time, $MF{$file}[2], $rfile;    # carry over the local mtime
 335            chmod $MF{$file}[0], $rfile;
 336        }
 337    }
 338  
 339  though this is not perfect. It could be improved by checking
 340  file checksums before updating. Not all NFS systems have
 341  reliable utime support (when used over NFS).
 342  
 343  =back
 344  
 345  =item rsync'ing the patches
 346  
 347  The source tree is maintained by the pumpking who applies patches to
 348  the files in the tree. These patches are either created by the
 349  pumpking himself using C<diff -c> after updating the file manually or
 350  by applying patches sent in by posters on the perl5-porters list.
 351  These patches are also saved and rsync'able, so you can apply them
 352  yourself to the source files.
 353  
 354  Presuming you are in a directory where your patches reside, you can
 355  get them in sync with
 356  
 357   # rsync -avz rsync://public.activestate.com/perl-current-diffs/ .
 358  
 359  This makes sure the latest available patch is downloaded to your
 360  patch directory.
 361  
 362  It's then up to you to apply these patches, using something like
 363  
 364   # last="`cat ../perl-current/.patch`.gz"
 365   # rsync -avz rsync://public.activestate.com/perl-current-diffs/ .
 366   # find . -name '*.gz' -newer $last -exec gzcat {} \; >blead.patch
 367   # cd ../perl-current
 368   # patch -p1 -N <../perl-current-diffs/blead.patch
 369  
 370  or, since this is only a hint towards how it works, use CPAN-patchaperl
 371  from Andreas König to have better control over the patching process.
 372  
 373  =back
 374  
 375  =head2 Why rsync the source tree
 376  
 377  =over 4
 378  
 379  =item It's easier to rsync the source tree
 380  
 381  Since you don't have to apply the patches yourself, you are sure all
 382  files in the source tree are in the right state.
 383  
 384  =item It's more reliable
 385  
 386  While both the rsync-able source and patch areas are automatically
 387  updated every few minutes, keep in mind that applying patches may
 388  sometimes mean careful hand-holding, especially if your version of
 389  the C<patch> program does not understand how to deal with new files,
 390  files with 8-bit characters, or files without trailing newlines.
 391  
 392  =back
 393  
 394  =head2 Why rsync the patches
 395  
 396  =over 4
 397  
 398  =item It's easier to rsync the patches
 399  
 400  If you have more than one machine that you want to keep in sync with
 401  bleadperl, it's easier to rsync the patches only once and then apply
 402  them to all the source trees on the different machines.
 403  
 404  If you try to keep pace on 5 different machines, of which
 405  only one has access to the WAN, rsync'ing all the source
 406  trees would then have to be done 5 times over the NFS. Having
 407  rsync'ed the patches only once, you can apply them to all the source
 408  trees automatically. Need I say more ;-)
 409  
 410  =item It's a good reference
 411  
 412  If you not only want to have the most recent development branch,
 413  but also want to B<fix> bugs or extend features, you will want to dive
 414  into the sources. If you are a seasoned perl core diver, you won't
 415  need manuals, tips, roadmaps, perlguts.pod or other aids to find
 416  your way around. But if you are a beginner, the patches may help you
 417  find where you should start and how to change the bits that
 418  bug you.
 419  
 420  The file B<Changes> is updated on occasions the pumpking sees as his
 421  own little sync points. On those occasions, he releases a tar-ball of
 422  the current source tree (i.e. perl@7582.tar.gz), which will be an
 423  excellent point to start with when choosing to use the 'rsync the
 424  patches' scheme. Starting with perl@7582, which means a set of source
 425  files on which the latest applied patch is number 7582, you apply all
 426  succeeding patches available from then on (7583, 7584, ...).
 427  
 428  You can use the patches later as a kind of search archive.
 429  
 430  =over 4
 431  
 432  =item Finding a start point
 433  
 434  If you want to fix/change the behaviour of function/feature Foo, just
 435  scan the patches for patches that mention Foo either in the subject,
 436  the comments, or the body of the fix. There is a good chance that the
 437  patch shows you the files affected by that patch, which are very likely
 438  to be the starting point of your journey into the guts of perl.
 439  
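A small throwaway script can do the scanning for you. This sketch assumes
you are sitting in the rsync'd directory of gzipped patches and have a
C<gzip> binary on your path; the script itself is just an illustration,
not part of the distribution:

  #!/usr/bin/perl -w
  # Print the name of every patch that mentions the given keyword.
  use strict;

  my $keyword = shift or die "usage: $0 keyword\n";

  foreach my $patch (sort glob "*.gz") {
      open my $fh, "-|", "gzip", "-dc", $patch
          or warn("cannot read $patch: $!"), next;
      while (<$fh>) {
          if (/\Q$keyword\E/) { print "$patch\n"; last }
      }
      close $fh;
  }
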
 440  =item Finding how to fix a bug
 441  
 442  If you've found I<where> the function/feature Foo misbehaves, but you
 443  don't know how to fix it (but you do know the change you want to
 444  make), you can, again, peruse the patches for similar changes and
 445  look at how others apply the fix.
 446  
 447  =item Finding the source of misbehaviour
 448  
 449  When you keep in sync with bleadperl, the pumpking would love to
 450  I<see> that the community efforts really work. So after each of his
 451  sync points, you should run 'make test' to check if everything is still
 452  in working order. If it is, you do 'make ok', which will send an OK
 453  report to I<perlbug@perl.org>. (If you do not have access to a mailer
 454  from the system on which you just successfully ran 'make test', you can
 455  do 'make okfile', which creates the file C<perl.ok>, which you can
 456  then take to your favourite mailer and mail yourself).
 457  
 458  But of course, as always, things will not always lead to a success
 459  path, and one or more tests may fail during 'make test'. Before
 460  sending in a bug report (using 'make nok' or 'make nokfile'), check
 461  the mailing list to see whether someone else has already reported the bug
 462  and, if so, confirm it by replying to that message. If not, you might want to
 463  trace the source of that misbehaviour B<before> sending in the bug,
 464  which will help all the other porters in finding the solution.
 465  
 466  Here the saved patches come in very handy. You can check the list of
 467  patches to see which patch changed what file and what change caused
 468  the misbehaviour. If you note that in the bug report, it saves
 469  whoever tries to solve it from having to look for that point themselves.
 470  
 471  =back
 472  
 473  If searching the patches is too bothersome, you might consider using
 474  perl's bugtron to find more information about discussions and
 475  ramblings on posted bugs.
 476  
 477  If you want to get the best of both worlds, rsync both the source
 478  tree for convenience, reliability and ease, and rsync the patches
 479  for reference.
 480  
 481  =back
 482  
 483  =head2 Working with the source
 484  
 485  Because you cannot use the Perforce client, you cannot easily generate
 486  diffs against the repository, nor will merges occur when you update
 487  via rsync.  If you edit a file locally and then rsync against the
 488  latest source, changes made in the remote copy will I<overwrite> your
 489  local versions!
 490  
 491  The best way to deal with this is to maintain a tree of symlinks to
 492  the rsync'd source.  Then, when you want to edit a file, you remove
 493  the symlink, copy the real file into the other tree, and edit it.  You
 494  can then diff your edited file against the original to generate a
 495  patch, and you can safely update the original tree.
 496  
 497  Perl's F<Configure> script can generate this tree of symlinks for you.
 498  The following example assumes that you have used rsync to pull a copy
 499  of the Perl source into the F<perl-rsync> directory.  In the directory
 500  above that one, you can execute the following commands:
 501  
 502    mkdir perl-dev
 503    cd perl-dev
 504    ../perl-rsync/Configure -Dmksymlinks -Dusedevel -D"optimize=-g"
 505  
 506  This will start the Perl configuration process.  After a few prompts,
 507  you should see something like this:
 508  
 509    Symbolic links are supported.
 510  
 511    Checking how to test for symbolic links...
 512    Your builtin 'test -h' may be broken.
 513    Trying external '/usr/bin/test -h'.
 514    You can test for symbolic links with '/usr/bin/test -h'.
 515  
 516    Creating the symbolic links...
 517    (First creating the subdirectories...)
 518    (Then creating the symlinks...)
 519  
 520  The specifics may vary based on your operating system, of course.
 521  After you see this, you can abort the F<Configure> script, and you
 522  will see that the directory you are in has a tree of symlinks to the
 523  F<perl-rsync> directories and files.
 524  
 525  If you plan to do a lot of work with the Perl source, here are some
 526  Bourne shell script functions that can make your life easier:
 527  
 528      function edit {
 529          if [ -L "$1" ]; then
 530              mv "$1" "$1.orig"
 531              cp "$1.orig" "$1"
 532              vi "$1"
 533          else
 534              vi "$1"
 535          fi
 536      }
 537  
 538      function unedit {
 539          if [ -L "$1.orig" ]; then
 540              rm "$1"
 541              mv "$1.orig" "$1"
 542          fi
 543      }
 544  
 545  Replace "vi" with your favorite flavor of editor.
 546  
 547  Here is another function which will quickly generate a patch for the
 548  files which have been edited in your symlink tree:
 549  
 550      mkpatchorig() {
 551          local diffopts
 552          for f in `find . -name '*.orig' | sed s,^\./,,`
 553          do
 554              case `echo $f | sed 's,.orig$,,;s,.*\.,,'` in
 555                  c)   diffopts=-p ;;
 556                  pod) diffopts='-F^=' ;;
 557                  *)   diffopts= ;;
 558              esac
 559              diff -du $diffopts $f `echo $f | sed 's,.orig$,,'`
 560          done
 561      }
 562  
 563  This function produces patches which include enough context to make
 564  your changes obvious.  This makes it easier for the Perl pumpking(s)
 565  to review them when you send them to the perl5-porters list, and that
 566  means they're more likely to get applied.
 567  
 568  This function assumes GNU diff, and may require some tweaking for
 569  other diff variants.
 570  
 571  =head2 Perlbug administration
 572  
 573  There is a single remote administrative interface for modifying bug status,
 574  category, open issues etc. using the B<RT> bugtracker system, maintained
 575  by Robert Spier.  Become an administrator, and close any bugs you can get
 576  your sticky mitts on:
 577  
 578      http://bugs.perl.org/
 579  
 580  To email the bug system administrators:
 581  
 582      "perlbug-admin" <perlbug-admin@perl.org>
 583  
 584  =head2 Submitting patches
 585  
 586  Always submit patches to I<perl5-porters@perl.org>.  If you're
 587  patching a core module and there's an author listed, send the author a
 588  copy (see L<Patching a core module>).  This lets other porters review
 589  your patch, which catches a surprising number of errors in patches.
 590  Either use the diff program (available in source code form from
 591  ftp://ftp.gnu.org/pub/gnu/ ), or use Johan Vromans' I<makepatch>
 592  (available from I<CPAN/authors/id/JV/>).  Unified diffs are preferred,
 593  but context diffs are accepted.  Do not send RCS-style diffs or diffs
 594  without context lines.  More information is given in the
 595  I<Porting/patching.pod> file in the Perl source distribution.  Please
 596  patch against the latest B<development> version. (e.g., even if you're
 597  fixing a bug in the 5.8 track, patch against the latest B<development>
 598  version rsynced from rsync://public.activestate.com/perl-current/ )
 599  
 600  If changes are accepted, they are applied to the development branch. Then
 601  the 5.8 pumpking decides which of those patches is to be backported to the
 602  maint branch.  Only patches that survive the heat of the development
 603  branch get applied to maintenance versions.
 604  
 605  Your patch should update the documentation and test suite.  See
 606  L<Writing a test>.  If you have added or removed files in the distribution,
 607  edit the MANIFEST file accordingly, sort the MANIFEST file using
 608  C<make manisort>, and include those changes as part of your patch.
 609  
 610  Patching documentation also follows the same order: if accepted, a patch
 611  is first applied to B<development>, and if relevant then it's backported
 612  to B<maintenance>. (With an exception for some patches that document
 613  behaviour that only appears in the maintenance branch, but which has
 614  changed in the development version.)
 615  
 616  To report a bug in Perl, use the program I<perlbug> which comes with
 617  Perl (if you can't get Perl to work, send mail to the address
 618  I<perlbug@perl.org> or I<perlbug@perl.com>).  Reporting bugs through
 619  I<perlbug> feeds into the automated bug-tracking system, access to
 620  which is provided through the web at http://rt.perl.org/rt3/ .  It
 621  often pays to check the archives of the perl5-porters mailing list to
 622  see whether the bug you're reporting has been reported before, and if
 623  so whether it was considered a bug.  See above for the location of
 624  the searchable archives.
 625  
 626  The CPAN testers ( http://testers.cpan.org/ ) are a group of
 627  volunteers who test CPAN modules on a variety of platforms.  Perl
 628  Smokers ( http://www.nntp.perl.org/group/perl.daily-build and
 629  http://www.nntp.perl.org/group/perl.daily-build.reports/ )
 630  automatically test Perl source releases on platforms with various
 631  configurations.  Both efforts welcome volunteers. In order to get
 632  involved in smoke testing of perl itself, visit
 633  L<http://search.cpan.org/dist/Test-Smoke>. In order to start smoke
 634  testing CPAN modules visit L<http://search.cpan.org/dist/CPAN-YACSmoke/>
 635  or L<http://search.cpan.org/dist/POE-Component-CPAN-YACSmoke/> or
 636  L<http://search.cpan.org/dist/CPAN-Reporter/>.
 637  
 638  It's a good idea to read and lurk for a while before chipping in.
 639  That way you'll get to see the dynamic of the conversations, learn the
 640  personalities of the players, and hopefully be better prepared to make
 641  a useful contribution when you do speak up.
 642  
 643  If after all this you still think you want to join the perl5-porters
 644  mailing list, send mail to I<perl5-porters-subscribe@perl.org>.  To
 645  unsubscribe, send mail to I<perl5-porters-unsubscribe@perl.org>.
 646  
 647  To hack on the Perl guts, you'll need to read the following things:
 648  
 649  =over 3
 650  
 651  =item L<perlguts>
 652  
 653  This is of paramount importance, since it's the documentation of what
 654  goes where in the Perl source. Read it over a couple of times and it
 655  might start to make sense - don't worry if it doesn't yet, because the
 656  best way to study it is to read it in conjunction with poking at Perl
 657  source, and we'll do that later on.
 658  
 659  You might also want to look at Gisle Aas's illustrated perlguts -
 660  there's no guarantee that this will be absolutely up-to-date with the
 661  latest documentation in the Perl core, but the fundamentals will be
 662  right. ( http://gisle.aas.no/perl/illguts/ )
 663  
 664  =item L<perlxstut> and L<perlxs>
 665  
 666  A working knowledge of XSUB programming is incredibly useful for core
 667  hacking; XSUBs use techniques drawn from the PP code, the portion of the
 668  guts that actually executes a Perl program. It's a lot gentler to learn
 669  those techniques from simple examples and explanation than from the core
 670  itself.
 671  
 672  =item L<perlapi>
 673  
 674  The documentation for the Perl API explains what some of the internal
 675  functions do, as well as the many macros used in the source.
 676  
 677  =item F<Porting/pumpkin.pod>
 678  
 679  This is a collection of words of wisdom for a Perl porter; some of it is
 680  only useful to the pumpkin holder, but most of it applies to anyone
 681  wanting to go about Perl development.
 682  
 683  =item The perl5-porters FAQ
 684  
 685  This should be available from http://dev.perl.org/perl5/docs/p5p-faq.html .
 686  It contains hints on reading perl5-porters, information on how
 687  perl5-porters works and how Perl development in general works.
 688  
 689  =back
 690  
 691  =head2 Finding Your Way Around
 692  
 693  Perl maintenance can be split into a number of areas, and certain people
 694  (pumpkins) will have responsibility for each area. These areas sometimes
 695  correspond to files or directories in the source kit. Among the areas are:
 696  
 697  =over 3
 698  
 699  =item Core modules
 700  
 701  Modules shipped as part of the Perl core live in the F<lib/> and F<ext/>
 702  subdirectories: F<lib/> is for the pure-Perl modules, and F<ext/>
 703  contains the core XS modules.
 704  
 705  =item Tests
 706  
 707  There are tests for nearly all the modules, built-ins and major bits
 708  of functionality.  Test files all have a .t suffix.  Module tests live
 709  in the F<lib/> and F<ext/> directories next to the module being
 710  tested.  Others live in F<t/>.  See L<Writing a test>.
 711  
 712  =item Documentation
 713  
 714  Documentation maintenance includes looking after everything in the
 715  F<pod/> directory (as well as contributing new documentation) and
 716  the documentation for the modules in the core.
 717  
 718  =item Configure
 719  
 720  The configure process is the way we make Perl portable across the
 721  myriad of operating systems it supports. Responsibility for the
 722  configure, build and installation process, as well as the overall
 723  portability of the core code rests with the configure pumpkin - others
 724  help out with individual operating systems.
 725  
 726  The files involved are the operating system directories (F<win32/>,
 727  F<os2/>, F<vms/> and so on), the shell scripts which generate F<config.h>
 728  and F<Makefile>, as well as the metaconfig files which generate
 729  F<Configure>. (metaconfig isn't included in the core distribution.)
 730  
 731  =item Interpreter
 732  
 733  And of course, there's the core of the Perl interpreter itself. Let's
 734  have a look at that in a little more detail.
 735  
 736  =back
 737  
 738  Before we leave looking at the layout, though, don't forget that
 739  F<MANIFEST> contains not only the file names in the Perl distribution,
 740  but short descriptions of what's in them, too. For an overview of the
 741  important files, try this:
 742  
 743      perl -lne 'print if /^[^\/]+\.[ch]\s+/' MANIFEST
 744  
 745  =head2 Elements of the interpreter
 746  
 747  The work of the interpreter has two main stages: compiling the code
 748  into the internal representation, or bytecode, and then executing it.
 749  L<perlguts/Compiled code> explains exactly how the compilation stage
 750  happens.
 751  
 752  Here is a short breakdown of perl's operation:
 753  
 754  =over 3
 755  
 756  =item Startup
 757  
 758  The action begins in F<perlmain.c> (or F<miniperlmain.c> for miniperl).
 759  This is very high-level code, enough to fit on a single screen, and it
 760  resembles the code found in L<perlembed>; most of the real action takes
 761  place in F<perl.c>.
 762  
 763  First, F<perlmain.c> allocates some memory and constructs a Perl
 764  interpreter:
 765  
 766      1 PERL_SYS_INIT3(&argc,&argv,&env);
 767      2
 768      3 if (!PL_do_undump) {
 769      4     my_perl = perl_alloc();
 770      5     if (!my_perl)
 771      6         exit(1);
 772      7     perl_construct(my_perl);
 773      8     PL_perl_destruct_level = 0;
 774      9 }
 775  
 776  Line 1 is a macro, and its definition is dependent on your operating
 777  system. Line 3 references C<PL_do_undump>, a global variable - all
 778  global variables in Perl start with C<PL_>. This tells you whether the
 779  current running program was created with the C<-u> flag to perl and then
 780  F<undump>, which means it's going to be false in any sane context.
 781  
 782  Line 4 calls a function in F<perl.c> to allocate memory for a Perl
 783  interpreter. It's quite a simple function, and the guts of it looks like
 784  this:
 785  
 786      my_perl = (PerlInterpreter*)PerlMem_malloc(sizeof(PerlInterpreter));
 787  
 788  Here you see an example of Perl's system abstraction, which we'll see
 789  later: C<PerlMem_malloc> is either your system's C<malloc>, or Perl's
 790  own C<malloc> as defined in F<malloc.c> if you selected that option at
 791  configure time.
 792  
 793  Next, in line 7, we construct the interpreter; this sets up all the
 794  special variables that Perl needs, the stacks, and so on.
 795  
 796  Now we pass Perl the command line options, and tell it to go:
 797  
 798      exitstatus = perl_parse(my_perl, xs_init, argc, argv, (char **)NULL);
 799      if (!exitstatus) {
 800          exitstatus = perl_run(my_perl);
 801      }
 802  
 803  
 804  C<perl_parse> is actually a wrapper around C<S_parse_body>, as defined
 805  in F<perl.c>, which processes the command line options, sets up any
 806  statically linked XS modules, opens the program and calls C<yyparse> to
 807  parse it.
 808  
 809  =item Parsing
 810  
 811  The aim of this stage is to take the Perl source, and turn it into an op
 812  tree. We'll see what one of those looks like later. Strictly speaking,
 813  there's three things going on here.
 814  
 815  C<yyparse>, the parser, lives in F<perly.c>, although you're better off
 816  reading the original YACC input in F<perly.y>. (Yes, Virginia, there
 817  B<is> a YACC grammar for Perl!) The job of the parser is to take your
 818  code and "understand" it, splitting it into sentences, deciding which
 819  operands go with which operators and so on.
 820  
 821  The parser is nobly assisted by the lexer, which chunks up your input
 822  into tokens, and decides what type of thing each token is: a variable
 823  name, an operator, a bareword, a subroutine, a core function, and so on.
 824  The main point of entry to the lexer is C<yylex>, and that and its
 825  associated routines can be found in F<toke.c>. Perl isn't much like
 826  other computer languages; it's highly context sensitive at times, it can
 827  be tricky to work out what sort of token something is, or where a token
 828  ends. As such, there's a lot of interplay between the tokeniser and the
 829  parser, which can get pretty frightening if you're not used to it.
 830  
 831  As the parser understands a Perl program, it builds up a tree of
 832  operations for the interpreter to perform during execution. The routines
 833  which construct and link together the various operations are to be found
 834  in F<op.c>, and will be examined later.
 835  
 836  =item Optimization
 837  
 838  Now the parsing stage is complete, and the finished tree represents
 839  the operations that the Perl interpreter needs to perform to execute our
 840  program. Next, Perl does a dry run over the tree looking for
 841  optimisations: constant expressions such as C<3 + 4> will be computed
 842  now, and the optimizer will also see if any multiple operations can be
 843  replaced with a single one. For instance, to fetch the variable C<$foo>,
 844  instead of grabbing the glob C<*foo> and looking at the scalar
 845  component, the optimizer fiddles the op tree to use a function which
 846  directly looks up the scalar in question. The main optimizer is C<peep>
 847  in F<op.c>, and many ops have their own optimizing functions.
 848  
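You can watch the constant folding happen with the B::Deparse compiler
backend, which turns the already-optimised op tree back into Perl source
(the exact output may vary slightly between versions):

     % perl -MO=Deparse -e 'print 3 + 4'
     print 7;

The C<7> has already been computed before the program ever runs; no
C<add> op for it survives into the tree.
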
 849  =item Running
 850  
 851  Now we're finally ready to go: we have compiled Perl byte code, and all
 852  that's left to do is run it. The actual execution is done by the
 853  C<runops_standard> function in F<run.c>; more specifically, it's done by
 854  these three innocent looking lines:
 855  
 856      while ((PL_op = CALL_FPTR(PL_op->op_ppaddr)(aTHX))) {
 857          PERL_ASYNC_CHECK();
 858      }
 859  
 860  You may be more comfortable with the Perl version of that:
 861  
 862      PERL_ASYNC_CHECK() while $Perl::op = &{$Perl::op->{function}};
 863  
 864  Well, maybe not. Anyway, each op contains a function pointer, which
 865  stipulates the function which will actually carry out the operation.
 866  This function will return the next op in the sequence - this allows for
 867  things like C<if> which choose the next op dynamically at run time.
 868  The C<PERL_ASYNC_CHECK> makes sure that things like signals interrupt
 869  execution if required.
 870  
 871  The actual functions called are known as PP code, and they're spread
 872  between four files: F<pp_hot.c> contains the "hot" code, which is most
 873  often used and highly optimized, F<pp_sys.c> contains all the
 874  system-specific functions, F<pp_ctl.c> contains the functions which
 875  implement control structures (C<if>, C<while> and the like) and F<pp.c>
 876  contains everything else. These are, if you like, the C code for Perl's
 877  built-in functions and operators.
 878  
 879  Note that each C<pp_> function is expected to return a pointer to the next
 880  op. Calls to perl subs (and eval blocks) are handled within the same
 881  runops loop, and do not consume extra space on the C stack. For example,
 882  C<pp_entersub> and C<pp_entertry> just push a C<CxSUB> or C<CxEVAL> block
 883  struct onto the context stack which contain the address of the op
 884  following the sub call or eval. They then return the first op of that sub
 885  or eval block, and so execution continues in that sub or block.  Later, a
 886  C<pp_leavesub> or C<pp_leavetry> op pops the C<CxSUB> or C<CxEVAL>,
 887  retrieves the return op from it, and returns it.
 888  
 889  =item Exception handling
 890  
 891  Perl's exception handling (i.e. C<die> etc.) is built on top of the low-level
 892  C<setjmp()>/C<longjmp()> C-library functions. These basically provide a
 893  way to capture the current PC and SP registers and later restore them; i.e.
 894  a C<longjmp()> continues at the point in code where a previous C<setjmp()>
 895  was done, with anything further up on the C stack being lost. This is why
 896  code should always save values using C<SAVE_FOO> rather than in auto
 897  variables.
 898  
 899  The perl core wraps C<setjmp()> etc in the macros C<JMPENV_PUSH> and
 900  C<JMPENV_JUMP>. The basic rule of perl exceptions is that C<exit>, and
 901  C<die> (in the absence of C<eval>) perform a C<JMPENV_JUMP(2)>, while
 902  C<die> within C<eval> does a C<JMPENV_JUMP(3)>.
 903  
 904  Entry points to perl, such as C<perl_parse()>, C<perl_run()> and
 905  C<call_sv(cv, G_EVAL)>, each do a C<JMPENV_PUSH>, then enter a runops
 906  loop or whatever, and handle possible exception returns. For a 2 return,
 907  final cleanup is performed, such as popping stacks and calling C<CHECK> or
 908  C<END> blocks. Amongst other things, this is how scope cleanup still
 909  occurs during an C<exit>.
 910  
 911  If a C<die> can find a C<CxEVAL> block on the context stack, then the
 912  stack is popped to that level and the return op in that block is assigned
 913  to C<PL_restartop>; then a C<JMPENV_JUMP(3)> is performed.  This normally
 914  passes control back to the guard. In the case of C<perl_run> and
 915  C<call_sv>, a non-null C<PL_restartop> triggers re-entry to the runops
 916  loop. This is the normal way that C<die> or C<croak> is handled within an
 917  C<eval>.
 918  
 919  Sometimes ops are executed within an inner runops loop, such as tie, sort
 920  or overload code. In this case, something like
 921  
 922      sub FETCH { eval { die } }
 923  
 924  would cause a longjmp right back to the guard in C<perl_run>, popping both
 925  runops loops, which is clearly incorrect. One way to avoid this is for the
 926  tie code to do a C<JMPENV_PUSH> before executing C<FETCH> in the inner
 927  runops loop, but for efficiency reasons, perl in fact just sets a flag,
 928  using C<CATCH_SET(TRUE)>. The C<pp_require>, C<pp_entereval> and
 929  C<pp_entertry> ops check this flag, and if true, they call C<docatch>,
 930  which does a C<JMPENV_PUSH> and starts a new runops level to execute the
 931  code, rather than doing it on the current loop.
 932  
 933  As a further optimisation, on exit from the eval block in the C<FETCH>,
 934  execution of the code following the block is still carried on in the inner
 935  loop.  When an exception is raised, C<docatch> compares the C<JMPENV>
 936  level of the C<CxEVAL> with C<PL_top_env> and if they differ, just
 937  re-throws the exception. In this way any inner loops get popped.
 938  
 939  Here's an example.
 940  
 941      1: eval { tie @a, 'A' };
 942      2: sub A::TIEARRAY {
 943      3:     eval { die };
 944      4:     die;
 945      5: }
 946  
 947  To run this code, C<perl_run> is called, which does a C<JMPENV_PUSH> then
 948  enters a runops loop. This loop executes the eval and tie ops on line 1,
 949  with the eval pushing a C<CxEVAL> onto the context stack.
 950  
 951  The C<pp_tie> does a C<CATCH_SET(TRUE)>, then starts a second runops loop
 952  to execute the body of C<TIEARRAY>. When it executes the entertry op on
 953  line 3, C<CATCH_GET> is true, so C<pp_entertry> calls C<docatch> which
 954  does a C<JMPENV_PUSH> and starts a third runops loop, which then executes
 955  the die op. At this point the C call stack looks like this:
 956  
 957      Perl_pp_die
 958      Perl_runops      # third loop
 959      S_docatch_body
 960      S_docatch
 961      Perl_pp_entertry
 962      Perl_runops      # second loop
 963      S_call_body
 964      Perl_call_sv
 965      Perl_pp_tie
 966      Perl_runops      # first loop
 967      S_run_body
 968      perl_run
 969      main
 970  
 971  and the context and data stacks, as shown by C<-Dstv>, look like:
 972  
 973      STACK 0: MAIN
 974        CX 0: BLOCK  =>
 975        CX 1: EVAL   => AV()  PV("A"\0)
 976        retop=leave
 977      STACK 1: MAGIC
 978        CX 0: SUB    =>
 979        retop=(null)
 980        CX 1: EVAL   => *
 981        retop=nextstate
 982  
 983  The die pops the first C<CxEVAL> off the context stack, sets
 984  C<PL_restartop> from it, does a C<JMPENV_JUMP(3)>, and control returns to
 985  the top C<docatch>. This then starts another third-level runops level,
 986  which executes the nextstate, pushmark and die ops on line 4. At the point
 987  that the second C<pp_die> is called, the C call stack looks exactly like
 988  that above, even though we are no longer within an inner eval; this is
 989  because of the optimization mentioned earlier. However, the context stack
 990  now looks like this, i.e. with the top CxEVAL popped:
 991  
 992      STACK 0: MAIN
 993        CX 0: BLOCK  =>
 994        CX 1: EVAL   => AV()  PV("A"\0)
 995        retop=leave
 996      STACK 1: MAGIC
 997        CX 0: SUB    =>
 998        retop=(null)
 999  
1000  The die on line 4 pops the context stack back down to the CxEVAL, leaving
1001  it as:
1002  
1003      STACK 0: MAIN
1004        CX 0: BLOCK  =>
1005  
1006  As usual, C<PL_restartop> is extracted from the C<CxEVAL>, and a
1007  C<JMPENV_JUMP(3)> done, which pops the C stack back to the docatch:
1008  
1009      S_docatch
1010      Perl_pp_entertry
1011      Perl_runops      # second loop
1012      S_call_body
1013      Perl_call_sv
1014      Perl_pp_tie
1015      Perl_runops      # first loop
1016      S_run_body
1017      perl_run
1018      main
1019  
1020  In this case, because the C<JMPENV> level recorded in the C<CxEVAL>
1021  differs from the current one, C<docatch> just does a C<JMPENV_JUMP(3)>
1022  and the C stack unwinds to:
1023  
1024      perl_run
1025      main
1026  
1027  Because C<PL_restartop> is non-null, C<run_body> starts a new runops loop
1028  and execution continues.
1029  
1030  =back
1031  
1032  =head2 Internal Variable Types
1033  
1034  You should by now have had a look at L<perlguts>, which tells you about
1035  Perl's internal variable types: SVs, HVs, AVs and the rest. If not, do
1036  that now.
1037  
1038  These variables are used not only to represent Perl-space variables, but
1039  also any constants in the code, as well as some structures completely
1040  internal to Perl. The symbol table, for instance, is an ordinary Perl
1041  hash. Your code is represented by an SV as it's read into the parser;
1042  any program files you call are opened via ordinary Perl filehandles, and
1043  so on.
1044  
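You can see this from Perl space too: the main symbol table is just the
hash C<%main::>, so creating a package variable adds a key to it. For
instance:

    % perl -le '$answer = 42; print "found" if exists $main::{answer}'
    found
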
1045  The core L<Devel::Peek|Devel::Peek> module lets us examine SVs from a
1046  Perl program. Let's see, for instance, how Perl treats the constant
1047  C<"hello">.
1048  
1049        % perl -MDevel::Peek -e 'Dump("hello")'
1050      1 SV = PV(0xa041450) at 0xa04ecbc
1051      2   REFCNT = 1
1052      3   FLAGS = (POK,READONLY,pPOK)
1053      4   PV = 0xa0484e0 "hello"\0
1054      5   CUR = 5
1055      6   LEN = 6
1056  
1057  Reading C<Devel::Peek> output takes a bit of practice, so let's go
1058  through it line by line.
1059  
1060  Line 1 tells us we're looking at an SV which lives at C<0xa04ecbc> in
1061  memory. SVs themselves are very simple structures, but they contain a
1062  pointer to a more complex structure. In this case, it's a PV, a
1063  structure which holds a string value, at location C<0xa041450>.  Line 2
1064  is the reference count; there are no other references to this data, so
1065  it's 1.
1066  
1067  Line 3 shows the flags for this SV - it's OK to use it as a PV, it's a
1068  read-only SV (because it's a constant) and the data is a PV internally.
1069  Next we've got the contents of the string, starting at location
1070  C<0xa0484e0>.
1071  
1072  Line 5 gives us the current length of the string - note that this does
1073  B<not> include the null terminator. Line 6 is not the length of the
1074  string, but the length of the currently allocated buffer; as the string
1075  grows, Perl automatically extends the available storage via a routine
1076  called C<SvGROW>.
1077  
1078  You can get at any of these quantities from C very easily; just add
1079  C<Sv> to the name of the field shown in the snippet, and you've got a
1080  macro which will return the value: C<SvCUR(sv)> returns the current
1081  length of the string, C<SvREFCNT(sv)> returns the reference count,
1082  C<SvPV(sv, len)> returns the string itself with its length, and so on.
1083  More macros to manipulate these properties can be found in L<perlguts>.
1084  
1085  Let's take an example of manipulating a PV, from C<sv_catpvn>, in F<sv.c>
1086  
1087       1  void
1088       2  Perl_sv_catpvn(pTHX_ register SV *sv, register const char *ptr, register STRLEN len)
1089       3  {
1090       4      STRLEN tlen;
1091       5      char *junk;
1092  
1093       6      junk = SvPV_force(sv, tlen);
1094       7      SvGROW(sv, tlen + len + 1);
1095       8      if (ptr == junk)
1096       9          ptr = SvPVX(sv);
1097      10      Move(ptr,SvPVX(sv)+tlen,len,char);
1098      11      SvCUR(sv) += len;
1099      12      *SvEND(sv) = '\0';
1100      13      (void)SvPOK_only_UTF8(sv);          /* validate pointer */
1101      14      SvTAINT(sv);
1102      15  }
1103  
1104  This is a function which adds a string, C<ptr>, of length C<len> onto
1105  the end of the PV stored in C<sv>. The first thing we do in line 6 is
1106  make sure that the SV B<has> a valid PV, by calling the C<SvPV_force>
1107  macro to force a PV. As a side effect, C<tlen> gets set to the current
1108  value of the PV, and the PV itself is returned to C<junk>.
1109  
1110  In line 7, we make sure that the SV will have enough room to accommodate
1111  the old string, the new string and the null terminator. If C<LEN> isn't
1112  big enough, C<SvGROW> will reallocate space for us.
1113  
1114  Now, if C<junk> is the same as the string we're trying to add, we can
1115  grab the string directly from the SV; C<SvPVX> is the address of the PV
1116  in the SV.
1117  
1118  Line 10 does the actual catenation: the C<Move> macro moves a chunk of
1119  memory around: we move the string C<ptr> to the end of the PV - that's
1120  the start of the PV plus its current length. We're moving C<len> bytes
1121  of type C<char>. After doing so, we need to tell Perl we've extended the
1122  string, by altering C<CUR> to reflect the new length. C<SvEND> is a
1123  macro which gives us the end of the string, so that needs to be a
1124  C<"\0">.
1125  
1126  Line 13 manipulates the flags; since we've changed the PV, any IV or NV
1127  values will no longer be valid: if we have C<$a=10; $a.="6";> we don't
1128  want to use the old IV of 10. C<SvPOK_only_UTF8> is a special UTF-8-aware
1129  version of C<SvPOK_only>, a macro which turns off the IOK and NOK flags
1130  and turns on POK. The final C<SvTAINT> is a macro which launders tainted
1131  data if taint mode is turned on.
1132  
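You can watch C<CUR> and C<LEN> change from Perl space by appending to a
string and dumping it with Devel::Peek again, for example:

    % perl -MDevel::Peek -e '$a = "hello"; $a .= ", world"; Dump($a)'

In the resulting dump (addresses and the exact C<LEN> chosen by the
allocator will differ on your machine), C<CUR> has grown to the length of
the new string and C<LEN> to at least one byte more than that, courtesy
of C<SvGROW>.
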
1133  AVs and HVs are more complicated, but SVs are by far the most common
1134  variable type being thrown around. Having seen something of how we
1135  manipulate these, let's go on and look at how the op tree is
1136  constructed.
1137  
1138  =head2 Op Trees
1139  
1140  First, what is the op tree, anyway? The op tree is the parsed
1141  representation of your program, as we saw in our section on parsing, and
1142  it's the sequence of operations that Perl goes through to execute your
1143  program, as we saw in L</Running>.
1144  
1145  An op is a fundamental operation that Perl can perform: all the built-in
1146  functions and operators are ops, and there are a series of ops which
1147  deal with concepts the interpreter needs internally - entering and
1148  leaving a block, ending a statement, fetching a variable, and so on.
1149  
1150  The op tree is connected in two ways: you can imagine that there are two
1151  "routes" through it, two orders in which you can traverse the tree.
1152  First, parse order reflects how the parser understood the code, and
1153  secondly, execution order tells perl what order to perform the
1154  operations in.
1155  
1156  The easiest way to examine the op tree is to stop Perl after it has
1157  finished parsing, and get it to dump out the tree. This is exactly what
1158  the compiler backends L<B::Terse|B::Terse>, L<B::Concise|B::Concise>
1159  and L<B::Debug|B::Debug> do.
1160  
1161  Let's have a look at how Perl sees C<$a = $b + $c>:
1162  
1163       % perl -MO=Terse -e '$a=$b+$c'
1164       1  LISTOP (0x8179888) leave
1165       2      OP (0x81798b0) enter
1166       3      COP (0x8179850) nextstate
1167       4      BINOP (0x8179828) sassign
1168       5          BINOP (0x8179800) add [1]
1169       6              UNOP (0x81796e0) null [15]
1170       7                  SVOP (0x80fafe0) gvsv  GV (0x80fa4cc) *b
1171       8              UNOP (0x81797e0) null [15]
1172       9                  SVOP (0x8179700) gvsv  GV (0x80efeb0) *c
1173      10          UNOP (0x816b4f0) null [15]
1174      11              SVOP (0x816dcf0) gvsv  GV (0x80fa460) *a
1175  
1176  Let's start in the middle, at line 4. This is a BINOP, a binary
1177  operator, which is at location C<0x8179828>. The specific operator in
1178  question is C<sassign> - scalar assignment - and you can find the code
1179  which implements it in the function C<pp_sassign> in F<pp_hot.c>. As a
1180  binary operator, it has two children: the add operator, providing the
1181  result of C<$b+$c>, is uppermost on line 5, and the left hand side is on
1182  line 10.
1183  
1184  Line 10 is the null op: this does exactly nothing. What is that doing
1185  there? If you see the null op, it's a sign that something has been
1186  optimized away after parsing. As we mentioned in L</Optimization>,
1187  the optimization stage sometimes converts two operations into one, for
1188  example when fetching a scalar variable. When this happens, instead of
1189  rewriting the op tree and cleaning up the dangling pointers, it's easier
1190  just to replace the redundant operation with the null op. Originally,
1191  the tree would have looked like this:
1192  
1193      10          SVOP (0x816b4f0) rv2sv [15]
1194      11              SVOP (0x816dcf0) gv  GV (0x80fa460) *a
1195  
1196  That is, fetch the C<a> entry from the main symbol table, and then look
1197  at the scalar component of it: C<gvsv> (C<pp_gvsv>, in F<pp_hot.c>)
1198  happens to do both these things.
1199  
1200  The right hand side, starting at line 5 is similar to what we've just
1201  seen: we have the C<add> op (C<pp_add> also in F<pp_hot.c>) add together
1202  two C<gvsv>s.
1203  
1204  Now, what's this about?
1205  
1206       1  LISTOP (0x8179888) leave
1207       2      OP (0x81798b0) enter
1208       3      COP (0x8179850) nextstate
1209  
1210  C<enter> and C<leave> are scoping ops, and their job is to perform any
1211  housekeeping every time you enter and leave a block: lexical variables
1212  are tidied up, unreferenced variables are destroyed, and so on. Every
1213  program will have those first three lines: C<leave> is a list, and its
1214  children are all the statements in the block. Statements are delimited
1215  by C<nextstate>, so a block is a collection of C<nextstate> ops, with
1216  the ops to be performed for each statement being the children of
1217  C<nextstate>. C<enter> is a single op which functions as a marker.
1218  
1219  That's how Perl parsed the program, from top to bottom:
1220  
1221                          Program
1222                             |
1223                         Statement
1224                             |
1225                             =
1226                            / \
1227                           /   \
1228                          $a   +
1229                              / \
1230                            $b   $c
1231  
1232  However, it's impossible to B<perform> the operations in this order:
1233  you have to find the values of C<$b> and C<$c> before you add them
1234  together, for instance. So, the other thread that runs through the op
1235  tree is the execution order: each op has a field C<op_next> which points
1236  to the next op to be run, so following these pointers tells us how perl
1237  executes the code. We can traverse the tree in this order using
1238  the C<exec> option to C<B::Terse>:
1239  
1240       % perl -MO=Terse,exec -e '$a=$b+$c'
1241       1  OP (0x8179928) enter
1242       2  COP (0x81798c8) nextstate
1243       3  SVOP (0x81796c8) gvsv  GV (0x80fa4d4) *b
1244       4  SVOP (0x8179798) gvsv  GV (0x80efeb0) *c
1245       5  BINOP (0x8179878) add [1]
1246       6  SVOP (0x816dd38) gvsv  GV (0x80fa468) *a
1247       7  BINOP (0x81798a0) sassign
1248       8  LISTOP (0x8179900) leave
1249  
1250  This probably makes more sense for a human: enter a block, start a
1251  statement. Get the values of C<$b> and C<$c>, and add them together.
1252  Find C<$a>, and assign one to the other. Then leave.
1253  
1254  The way Perl builds up these op trees in the parsing process can be
1255  unravelled by examining F<perly.y>, the YACC grammar. Let's take the
1256  piece we need to construct the tree for C<$a = $b + $c>
1257  
1258      1 term    :   term ASSIGNOP term
1259      2                { $$ = newASSIGNOP(OPf_STACKED, $1, $2, $3); }
1260      3         |   term ADDOP term
1261      4                { $$ = newBINOP($2, 0, scalar($1), scalar($3)); }
1262  
1263  If you're not used to reading BNF grammars, this is how it works: You're
1264  fed certain things by the tokeniser, which generally end up in upper
1265  case. Here, C<ADDOP>, is provided when the tokeniser sees C<+> in your
1266  code. C<ASSIGNOP> is provided when C<=> is used for assigning. These are
1267  "terminal symbols", because you can't get any simpler than them.
1268  
1269  The grammar, lines one and three of the snippet above, tells you how to
1270  build up more complex forms. These complex forms, "non-terminal symbols"
1271  are generally placed in lower case. C<term> here is a non-terminal
1272  symbol, representing a single expression.
1273  
1274  The grammar gives you the following rule: you can make the thing on the
1275  left of the colon if you see all the things on the right in sequence.
1276  This is called a "reduction", and the aim of parsing is to completely
1277  reduce the input. There are several different ways you can perform a
1278  reduction, separated by vertical bars: so, C<term> followed by C<=>
1279  followed by C<term> makes a C<term>, and C<term> followed by C<+>
1280  followed by C<term> can also make a C<term>.
1281  
1282  So, if you see two terms with an C<=> or C<+> between them, you can
1283  turn them into a single expression. When you do this, you execute the
1284  code in the block on the next line: if you see C<=>, you'll do the code
1285  in line 2. If you see C<+>, you'll do the code in line 4. It's this code
1286  which contributes to the op tree.
1287  
1288              |   term ADDOP term
1289              { $$ = newBINOP($2, 0, scalar($1), scalar($3)); }
1290  
1291  What this does is create a new binary op and feed it a number of
1292  variables. The variables refer to the tokens: C<$1> is the first token in
1293  the input, C<$2> the second, and so on - think regular expression
1294  backreferences. C<$$> is the op returned from this reduction. So, we
1295  call C<newBINOP> to create a new binary operator. The first parameter to
1296  C<newBINOP>, a function in F<op.c>, is the op type. It's an addition
1297  operator, so we want the type to be C<ADDOP>. We could specify this
1298  directly, but it's right there as the second token in the input, so we
1299  use C<$2>. The second parameter is the op's flags: 0 means "nothing
1300  special". Then the things to add: the left and right hand side of our
1301  expression, in scalar context.
1302  
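If you ever build ops by hand (for instance in an extension, or when
adding a new construct to the core), the call looks just like the one in
the grammar action.  Purely as a sketch - C<kid_b> and C<kid_c> are
made-up names for OP*s that fetch C<$b> and C<$c> - the C<add> node
above could be created like this:

    /* A sketch only: kid_b and kid_c are hypothetical OP*s fetching $b
       and $c.  OP_ADD is the opcode, 0 means "no special flags". */
    OP *sum = newBINOP(OP_ADD, 0, scalar(kid_b), scalar(kid_c));

In the grammar action the parser gets all of this for free: the opcode
arrives as C<$2> and the two kids as C<$1> and C<$3>.
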
1303  =head2 Stacks
1304  
1305  When perl executes something like the C<add> op, how does it pass on its
1306  results to the next op? The answer is, through the use of stacks. Perl
1307  has a number of stacks to store things it's currently working on, and
1308  we'll look at the three most important ones here.
1309  
1310  =over 3
1311  
1312  =item Argument stack
1313  
1314  Arguments are passed to PP code and returned from PP code using the
1315  argument stack, C<ST>. The typical way to handle arguments is to pop
1316  them off the stack, deal with them how you wish, and then push the result
1317  back onto the stack. This is how, for instance, the cosine operator
1318  works:
1319  
1320        NV value;
1321        value = POPn;
1322        value = Perl_cos(value);
1323        XPUSHn(value);
1324  
1325  We'll see a more tricky example of this when we consider Perl's macros
1326  below. C<POPn> gives you the NV (floating point value) of the top SV on
1327  the stack: the C<$x> in C<cos($x)>. Then we compute the cosine, and push
1328  the result back as an NV. The C<X> in C<XPUSHn> means that the stack
1329  should be extended if necessary - it can't be necessary here, because we
1330  know there's room for one more item on the stack, since we've just
1331  removed one! The C<XPUSH*> macros at least guarantee safety.
1332  
1333  Alternatively, you can fiddle with the stack directly: C<SP> gives you
1334  the first element in your portion of the stack, and C<TOP*> gives you
1335  the top SV/IV/NV/etc. on the stack. So, for instance, to do unary
1336  negation of an integer:
1337  
1338       SETi(-TOPi);
1339  
1340  Just set the integer value of the top stack entry to its negation.
1341  
1342  Argument stack manipulation in the core is exactly the same as it is in
1343  XSUBs - see L<perlxstut>, L<perlxs> and L<perlguts> for a longer
1344  description of the macros used in stack manipulation.
1345  
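Putting these macros together, a complete (though entirely hypothetical)
PP function that squares the number on top of the stack might look like
the following sketch; C<pp_mysquare> is not a real op in the core:

    PP(pp_mysquare)             /* hypothetical op, for illustration  */
    {
        dSP; dTARGET;           /* stack pointer, target for result   */
        NV value = POPn;        /* NV of the SV on top of the stack   */
        XPUSHn(value * value);  /* push result back, extend if needed */
        RETURN;                 /* hand back to the main run loop     */
    }
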
1346  =item Mark stack
1347  
1348  I say "your portion of the stack" above because PP code doesn't
1349  necessarily get the whole stack to itself: if your function calls
1350  another function, you'll only want to expose the arguments aimed for the
1351  called function, and not (necessarily) let it get at your own data. The
1352  way we do this is to have a "virtual" bottom-of-stack, exposed to each
1353  function. The mark stack keeps bookmarks to locations in the argument
1354  stack usable by each function. For instance, when dealing with a tied
1355  variable, (internally, something with "P" magic) Perl has to call
1356  methods for accesses to the tied variables. However, we need to separate
1357  the arguments exposed to the method from the arguments exposed to the
1358  original function - the store or fetch or whatever it may be. Here's
1359  roughly how the tied C<push> is implemented; see C<av_push> in F<av.c>:
1360  
1361       1    PUSHMARK(SP);
1362       2    EXTEND(SP,2);
1363       3    PUSHs(SvTIED_obj((SV*)av, mg));
1364       4    PUSHs(val);
1365       5    PUTBACK;
1366       6    ENTER;
1367       7    call_method("PUSH", G_SCALAR|G_DISCARD);
1368       8    LEAVE;
1369  
1370  Let's examine the whole implementation, for practice:
1371  
1372       1    PUSHMARK(SP);
1373  
1374  Push the current state of the stack pointer onto the mark stack. This is
1375  so that when we've finished adding items to the argument stack, Perl
1376  knows how many things we've added recently.
1377  
1378       2    EXTEND(SP,2);
1379       3    PUSHs(SvTIED_obj((SV*)av, mg));
1380       4    PUSHs(val);
1381  
1382  We're going to add two more items onto the argument stack: when you have
1383  a tied array, the C<PUSH> subroutine receives the object and the value
1384  to be pushed, and that's exactly what we have here - the tied object,
1385  retrieved with C<SvTIED_obj>, and the value, the SV C<val>.
1386  
1387       5    PUTBACK;
1388  
1389  Next we tell Perl to update the global stack pointer from our internal
1390  variable: C<dSP> only gave us a local copy, not a reference to the global.
1391  
1392       6    ENTER;
1393       7    call_method("PUSH", G_SCALAR|G_DISCARD);
1394       8    LEAVE;
1395  
1396  C<ENTER> and C<LEAVE> localise a block of code - they make sure that all
1397  variables are tidied up, everything that has been localised gets
1398  its previous value returned, and so on. Think of them as the C<{> and
1399  C<}> of a Perl block.
1400  
1401  To actually do the magic method call, we have to call a subroutine in
1402  Perl space: C<call_method> takes care of that, and it's described in
1403  L<perlcall>. We call the C<PUSH> method in scalar context, and we're
1404  going to discard its return value.  The call_method() function
1405  removes the top element of the mark stack, so there is nothing for
1406  the caller to clean up.
1407  
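On the receiving side, code can find out how many arguments were left
for it by comparing the stack pointer with the mark.  Here's a sketch of
what the callee side of a list operator typically does (C<do_something>
is a made-up placeholder):

    dSP; dMARK;                /* our stack pointer and our mark     */
    I32 items = SP - MARK;     /* how many SVs the caller pushed     */
    while (MARK < SP)
        do_something(*++MARK); /* walk the arguments, oldest first   */
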
1408  =item Save stack
1409  
1410  C doesn't have a concept of local scope, so perl provides one. We've
1411  seen that C<ENTER> and C<LEAVE> are used as scoping braces; the save
1412  stack implements the C equivalent of, for example:
1413  
1414      {
1415          local $foo = 42;
1416          ...
1417      }
1418  
1419  See L<perlguts/Localising Changes> for how to use the save stack.
1420  
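At the C level the pattern above looks roughly like this sketch
(C<SAVEINT> is one of a whole family of C<SAVE*> macros; C<some_int> is
just an ordinary C<int> variable of ours):

    ENTER;              /* open a new scope on the scope/save stacks */
    SAVEINT(some_int);  /* remember the current value of some_int    */
    some_int = 42;      /* scribble on it as much as we like ...     */
    LEAVE;              /* ... and some_int gets its old value back  */
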
1421  =back
1422  
1423  =head2 Millions of Macros
1424  
1425  One thing you'll notice about the Perl source is that it's full of
1426  macros. Some have called the pervasive use of macros the hardest thing
1427  to understand, others find it adds to clarity. Let's take an example,
1428  the code which implements the addition operator:
1429  
1430     1  PP(pp_add)
1431     2  {
1432     3      dSP; dATARGET; tryAMAGICbin(add,opASSIGN);
1433     4      {
1434     5        dPOPTOPnnrl_ul;
1435     6        SETn( left + right );
1436     7        RETURN;
1437     8      }
1438     9  }
1439  
1440  Every line here (apart from the braces, of course) contains a macro. The
1441  first line sets up the function declaration as Perl expects for PP code;
1442  line 3 sets up variable declarations for the argument stack and the
1443  target, the return value of the operation. Finally, it tries to see if
1444  the addition operation is overloaded; if so, the appropriate subroutine
1445  is called.
1446  
1447  Line 5 is another variable declaration - all variable declarations start
1448  with C<d> - which pops from the top of the argument stack two NVs (hence
1449  C<nn>) and puts them into the variables C<right> and C<left>, hence the
1450  C<rl>. These are the two operands to the addition operator. Next, we
1451  call C<SETn> to set the NV of the return value to the result of adding
1452  the two values. This done, we return - the C<RETURN> macro makes sure
1453  that our return value is properly handled, and we pass the next operator
1454  to run back to the main run loop.
1455  
1456  Most of these macros are explained in L<perlapi>, and some of the more
1457  important ones are explained in L<perlxs> as well. Pay special attention
1458  to L<perlguts/Background and PERL_IMPLICIT_CONTEXT> for information on
1459  the C<[pad]THX_?> macros.
1460  
1461  =head2 The .i Targets
1462  
1463  You can expand the macros in a F<foo.c> file by saying
1464  
1465      make foo.i
1466  
1467  which will expand the macros using cpp.  Don't be scared by the results.
1468  
1469  =head1 SOURCE CODE STATIC ANALYSIS
1470  
1471  Various tools exist for analysing C source code B<statically>, as
1472  opposed to B<dynamically>, that is, without executing the code.
1473  It is possible to detect resource leaks, undefined behaviour, type
1474  mismatches, portability problems, code paths that would cause illegal
1475  memory accesses, and other similar problems by just parsing the C code
1476  and looking at the resulting graph and what it tells about the
1477  execution and data flows.  As a matter of fact, this is exactly
1478  how C compilers know to give warnings about dubious code.
1479  
1480  =head2 lint, splint
1481  
1482  The good old C code quality inspector, C<lint>, is available on
1483  several platforms, but please be aware that there are several
1484  different implementations of it by different vendors, which means that
1485  the flags are not identical across different platforms.
1486  
1487  There is a lint variant called C<splint> (Secure Programming Lint)
1488  available from http://www.splint.org/ that should compile on any
1489  Unix-like platform.
1490  
1491  There are C<lint> and C<splint> targets in the Makefile, but you may have
1492  to diddle with the flags (see above).
1493  
1494  =head2 Coverity
1495  
1496  Coverity (http://www.coverity.com/) is a product similar to lint.  As
1497  a testbed for their product they periodically check several open
1498  source projects, and they give open source developers accounts to
1499  the defect databases.
1500  
1501  =head2 cpd (cut-and-paste detector)
1502  
1503  The cpd tool detects cut-and-paste coding.  If one instance of the
1504  cut-and-pasted code changes, all the other spots should probably be
1505  changed, too.  Therefore such code should probably be turned into a
1506  subroutine or a macro.
1507  
1508  cpd (http://pmd.sourceforge.net/cpd.html) is part of the pmd project
1509  (http://pmd.sourceforge.net/).  pmd was originally written for static
1510  analysis of Java code, but later the cpd part of it was extended to
1511  also parse C and C++.
1512  
1513  Download the pmd-bin-X.Y.zip from the SourceForge site, extract the
1514  pmd-X.Y.jar from it, and then run that on source code thusly:
1515  
1516    java -cp pmd-X.Y.jar net.sourceforge.pmd.cpd.CPD --minimum-tokens 100 --files /some/where/src --language c > cpd.txt
1517  
1518  You may run into memory limits, in which case you should use the -Xmx option:
1519  
1520    java -Xmx512M ...
1521  
1522  =head2 gcc warnings
1523  
1524  Though much can be written about the inconsistency and coverage
1525  problems of gcc warnings (like C<-Wall> not meaning "all the
1526  warnings", or some common portability problems not being covered by
1527  C<-Wall>, or C<-ansi> and C<-pedantic> both being a poorly defined
1528  collection of warnings, and so forth), gcc is still a useful tool in
1529  keeping our coding nose clean.
1530  
1531  C<-Wall> is on by default.
1532  
1533  C<-ansi> (and its sidekick, C<-pedantic>) would be nice to have on
1534  always, but unfortunately they are not safe on all platforms; they can,
1535  for example, cause fatal conflicts with the system headers (Solaris
1536  being a prime example).  If Configure C<-Dgccansipedantic> is used,
1537  the C<cflags> frontend selects C<-ansi -pedantic> for the platforms
1538  where they are known to be safe.
1539  
1540  Starting from Perl 5.9.4 the following extra flags are added:
1541  
1542  =over 4
1543  
1544  =item *
1545  
1546  C<-Wendif-labels>
1547  
1548  =item *
1549  
1550  C<-Wextra>
1551  
1552  =item *
1553  
1554  C<-Wdeclaration-after-statement>
1555  
1556  =back
1557  
1558  The following flags would be nice to have but they would first need
1559  their own Augean stablemaster:
1560  
1561  =over 4
1562  
1563  =item *
1564  
1565  C<-Wpointer-arith>
1566  
1567  =item *
1568  
1569  C<-Wshadow>
1570  
1571  =item *
1572  
1573  C<-Wstrict-prototypes>
1574  
1575  =back
1576  
1577  C<-Wtraditional> is another example of the annoying tendency of
1578  gcc to bundle a lot of warnings under one switch -- it would be
1579  impossible to deploy in practice because it would complain a lot -- but
1580  it does contain some warnings that would be beneficial to have available
1581  on their own, such as the warning about string constants inside macros
1582  containing the macro arguments: this behaved differently pre-ANSI
1583  than it does in ANSI, and some C compilers are still in transition,
1584  AIX being an example.
1585  
1586  =head2 Warnings of other C compilers
1587  
1588  Other C compilers (yes, there B<are> C compilers other than gcc) often
1589  have "strict ANSI" or "strict ANSI with some portability extensions"
1590  modes, like for example the Sun Workshop's C<-Xa> mode (which is on
1591  implicitly), or the DEC (these days, HP...) compiler's C<-std1>
1592  mode.
1593  
1594  =head2 DEBUGGING
1595  
1596  You can compile a special debugging version of Perl, which allows you
1597  to use the C<-D> option of Perl to tell more about what Perl is doing.
1598  But sometimes there is no alternative but to dive in with a debugger,
1599  either to see the stack trace of a core dump (very useful in a bug
1600  report), to try to figure out what went wrong before the core dump
1601  happened, or to see how we ended up with wrong or unexpected results.
1602  
1603  =head2 Poking at Perl
1604  
1605  To really poke around with Perl, you'll probably want to build Perl for
1606  debugging, like this:
1607  
1608      ./Configure -d -D optimize=-g
1609      make
1610  
1611  C<-g> is a flag to the C compiler to have it produce debugging
1612  information which will allow us to step through a running program,
1613  and to see which C function we are in (without the debugging
1614  information we might see only the numerical addresses of the functions,
1615  which is not very helpful).
1616  
1617  F<Configure> will also turn on the C<DEBUGGING> compilation symbol which
1618  enables all the internal debugging code in Perl. There are a whole bunch
1619  of things you can debug with this: L<perlrun> lists them all, and the
1620  best way to find out about them is to play about with them. The most
1621  useful options are probably
1622  
1623      l  Context (loop) stack processing
1624      t  Trace execution
1625      o  Method and overloading resolution
1626      c  String/numeric conversions
1627  
1628  Some of the functionality of the debugging code can be achieved using XS
1629  modules.
1630  
1631      -Dr => use re 'debug'
1632      -Dx => use O 'Debug'
1633  
1634  =head2 Using a source-level debugger
1635  
1636  If the debugging output of C<-D> doesn't help you, it's time to step
1637  through perl's execution with a source-level debugger.
1638  
1639  =over 3
1640  
1641  =item *
1642  
1643  We'll use C<gdb> for our examples here; the principles will apply to
1644  any debugger (many vendors call their debugger C<dbx>), but check the
1645  manual of the one you're using.
1646  
1647  =back
1648  
1649  To fire up the debugger, type
1650  
1651      gdb ./perl
1652  
1653  Or if you have a core dump:
1654  
1655      gdb ./perl core
1656  
1657  You'll want to do that in your Perl source tree so the debugger can read
1658  the source code. You should see the copyright message, followed by the
1659  prompt.
1660  
1661      (gdb)
1662  
1663  C<help> will get you into the documentation, but here are the most
1664  useful commands:
1665  
1666  =over 3
1667  
1668  =item run [args]
1669  
1670  Run the program with the given arguments.
1671  
1672  =item break function_name
1673  
1674  =item break source.c:xxx
1675  
1676  Tells the debugger that we'll want to pause execution when we reach
1677  either the named function (but see L<perlguts/Internal Functions>!) or the given
1678  line in the named source file.
1679  
1680  =item step
1681  
1682  Steps through the program a line at a time.
1683  
1684  =item next
1685  
1686  Steps through the program a line at a time, without descending into
1687  functions.
1688  
1689  =item continue
1690  
1691  Run until the next breakpoint.
1692  
1693  =item finish
1694  
1695  Run until the end of the current function, then stop again.
1696  
1697  =item 'enter'
1698  
1699  Just pressing Enter will do the most recent operation again - it's a
1700  blessing when stepping through miles of source code.
1701  
1702  =item print
1703  
1704  Execute the given C code and print its results. B<WARNING>: Perl makes
1705  heavy use of macros, and F<gdb> does not necessarily support macros
1706  (see later L</"gdb macro support">).  You'll have to substitute them
1707  yourself, or invoke cpp on the source code files
1708  (see L</"The .i Targets">).
1709  So, for instance, you can't say
1710  
1711      print SvPV_nolen(sv)
1712  
1713  but you have to say
1714  
1715      print Perl_sv_2pv_nolen(sv)
1716  
1717  =back
1718  
1719  You may find it helpful to have a "macro dictionary", which you can
1720  produce by saying C<cpp -dM perl.c | sort>. Even then, F<cpp> won't
1721  recursively apply those macros for you.
1722  
1723  =head2 gdb macro support
1724  
1725  Recent versions of F<gdb> have fairly good macro support, but
1726  in order to use it you'll need to compile perl with macro definitions
1727  included in the debugging information.  Using F<gcc> version 3.1, this
1728  means configuring with C<-Doptimize=-g3>.  Other compilers might use a
1729  different switch (if they support debugging macros at all).
1730  
1731  =head2 Dumping Perl Data Structures
1732  
1733  One way to get around this macro hell is to use the dumping functions in
1734  F<dump.c>; these work a little like an internal
1735  L<Devel::Peek|Devel::Peek>, but they also cover OPs and other structures
1736  that you can't get at from Perl. Let's take an example. We'll use the
1737  C<$a = $b + $c> we used before, but give it a bit of context:
1738  C<$b = "6XXXX"; $c = 2.3;>. Where's a good place to stop and poke around?
1739  
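You can call these functions from the debugger, or temporarily sprinkle
calls to them into the C code you are studying; for example (a sketch -
where you put these lines is up to you):

    sv_dump(sv);        /* like Devel::Peek's Dump() for this SV    */
    op_dump(PL_op);     /* dump the op perl is currently executing  */
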
1740  What about C<pp_add>, the function we examined earlier that implements the
1741  C<+> operator:
1742  
1743      (gdb) break Perl_pp_add
1744      Breakpoint 1 at 0x46249f: file pp_hot.c, line 309.
1745  
1746  Notice we use C<Perl_pp_add> and not C<pp_add> - see L<perlguts/Internal Functions>.
1747  With the breakpoint in place, we can run our program:
1748  
1749      (gdb) run -e '$b = "6XXXX"; $c = 2.3; $a = $b + $c'
1750  
1751  Lots of junk will go past as gdb reads in the relevant source files and
1752  libraries, and then:
1753  
1754      Breakpoint 1, Perl_pp_add () at pp_hot.c:309
1755      309         dSP; dATARGET; tryAMAGICbin(add,opASSIGN);
1756      (gdb) step
1757      311           dPOPTOPnnrl_ul;
1758      (gdb)
1759  
1760  We looked at this bit of code before, and we said that C<dPOPTOPnnrl_ul>
1761  arranges for two C<NV>s to be placed into C<left> and C<right> - let's
1762  slightly expand it:
1763  
1764      #define dPOPTOPnnrl_ul  NV right = POPn; \
1765                              SV *leftsv = TOPs; \
1766                              NV left = USE_LEFT(leftsv) ? SvNV(leftsv) : 0.0
1767  
1768  C<POPn> takes the SV from the top of the stack and obtains its NV either
1769  directly (if C<SvNOK> is set) or by calling the C<sv_2nv> function.
1770  C<TOPs> takes the next SV from the top of the stack - yes, C<POPn> uses
1771  C<TOPs> - but doesn't remove it. We then use C<SvNV> to get the NV from
1772  C<leftsv> in the same way as before - yes, C<POPn> uses C<SvNV>.
1773  
1774  Since we don't have an NV for C<$b>, we'll have to use C<sv_2nv> to
1775  convert it. If we step again, we'll find ourselves there:
1776  
1777      Perl_sv_2nv (sv=0xa0675d0) at sv.c:1669
1778      1669        if (!sv)
1779      (gdb)
1780  
1781  We can now use C<Perl_sv_dump> to investigate the SV:
1782  
1783      SV = PV(0xa057cc0) at 0xa0675d0
1784      REFCNT = 1
1785      FLAGS = (POK,pPOK)
1786      PV = 0xa06a510 "6XXXX"\0
1787      CUR = 5
1788      LEN = 6
1789      $1 = void
1790  
1791  We know we're going to get C<6> from this, so let's finish the
1792  subroutine:
1793  
1794      (gdb) finish
1795      Run till exit from #0  Perl_sv_2nv (sv=0xa0675d0) at sv.c:1671
1796      0x462669 in Perl_pp_add () at pp_hot.c:311
1797      311           dPOPTOPnnrl_ul;
1798  
1799  We can also dump out this op: the current op is always stored in
1800  C<PL_op>, and we can dump it with C<Perl_op_dump>. This'll give us
1801  similar output to L<B::Debug|B::Debug>.
1802  
1803      {
1804      13  TYPE = add  ===> 14
1805          TARG = 1
1806          FLAGS = (SCALAR,KIDS)
1807          {
1808              TYPE = null  ===> (12)
1809                (was rv2sv)
1810              FLAGS = (SCALAR,KIDS)
1811              {
1812      11          TYPE = gvsv  ===> 12
1813                  FLAGS = (SCALAR)
1814                  GV = main::b
1815              }
1816          }
1817  
1818  # finish this later #
1819  
1820  =head2 Patching
1821  
1822  All right, we've now had a look at how to navigate the Perl sources and
1823  some things you'll need to know when fiddling with them. Let's now get
1824  on and create a simple patch. Here's something Larry suggested: if a
1825  C<U> is the first active format during a C<pack>, (for example,
1826  C<pack "U3C8", @stuff>) then the resulting string should be treated as
1827  UTF-8 encoded.
1828  
1829  How do we prepare to fix this up? First we locate the code in question -
1830  the C<pack> happens at runtime, so it's going to be in one of the F<pp>
1831  files. Sure enough, C<pp_pack> is in F<pp.c>. Since we're going to be
1832  altering this file, let's copy it to F<pp.c~>.
1833  
1834  [Well, it was in F<pp.c> when this tutorial was written. It has now been
1835  split off with C<pp_unpack> to its own file, F<pp_pack.c>]
1836  
1837  Now let's look over C<pp_pack>: we take a pattern into C<pat>, and then
1838  loop over the pattern, taking each format character in turn into
1839  C<datum_type>. Then for each possible format character, we swallow up
1840  the other arguments in the pattern (a field width, an asterisk, and so
1841  on) and convert the next chunk of input into the specified format, adding
1842  it onto the output SV C<cat>.
1843  
1844  How do we know if the C<U> is the first format in the C<pat>? Well, if
1845  we have a pointer to the start of C<pat> then, if we see a C<U> we can
1846  test whether we're still at the start of the string. So, here's where
1847  C<pat> is set up:
1848  
1849      STRLEN fromlen;
1850      register char *pat = SvPVx(*++MARK, fromlen);
1851      register char *patend = pat + fromlen;
1852      register I32 len;
1853      I32 datumtype;
1854      SV *fromstr;
1855  
1856  We'll have another string pointer in there:
1857  
1858      STRLEN fromlen;
1859      register char *pat = SvPVx(*++MARK, fromlen);
1860      register char *patend = pat + fromlen;
1861   +  char *patcopy;
1862      register I32 len;
1863      I32 datumtype;
1864      SV *fromstr;
1865  
1866  And just before we start the loop, we'll set C<patcopy> to be the start
1867  of C<pat>:
1868  
1869      items = SP - MARK;
1870      MARK++;
1871      sv_setpvn(cat, "", 0);
1872   +  patcopy = pat;
1873      while (pat < patend) {
1874  
1875  Now if we see a C<U> which was at the start of the string, we turn on
1876  the C<UTF8> flag for the output SV, C<cat>:
1877  
1878   +  if (datumtype == 'U' && pat==patcopy+1)
1879   +      SvUTF8_on(cat);
1880      if (datumtype == '#') {
1881          while (pat < patend && *pat != '\n')
1882              pat++;
1883  
1884  Remember that it has to be C<patcopy+1> because the first character of
1885  the string is the C<U> which has been swallowed into C<datumtype>!
1886  
1887  Oops, we forgot one thing: what if there are spaces at the start of the
1888  pattern? C<pack("  U*", @stuff)> will have C<U> as the first active
1889  character, even though it's not the first thing in the pattern. In this
1890  case, we have to advance C<patcopy> along with C<pat> when we see spaces:
1891  
1892      if (isSPACE(datumtype))
1893          continue;
1894  
1895  needs to become
1896  
1897      if (isSPACE(datumtype)) {
1898          patcopy++;
1899          continue;
1900      }
1901  
1902  OK. That's the C part done. Now we must do two additional things before
1903  this patch is ready to go: we've changed the behaviour of Perl, and so
1904  we must document that change. We must also provide some more regression
1905  tests to make sure our patch works and doesn't create a bug somewhere
1906  else along the line.
1907  
1908  The regression tests for each operator live in F<t/op/>, and so we
1909  make a copy of F<t/op/pack.t> to F<t/op/pack.t~>. Now we can add our
1910  tests to the end. First, we'll test that the C<U> does indeed create
1911  Unicode strings.
1912  
1913  t/op/pack.t has a sensible ok() function, but if it didn't we could
1914  use the one from t/test.pl.
1915  
1916   require './test.pl';
1917   plan( tests => 159 );
1918  
1919  so instead of this:
1920  
1921   print 'not ' unless "1.20.300.4000" eq sprintf "%vd", pack("U*",1,20,300,4000);
1922   print "ok $test\n"; $test++;
1923  
1924  we can write the more sensible version (see L<Test::More> for a full
1925  explanation of is() and other testing functions).
1926  
1927   is( "1.20.300.4000", sprintf "%vd", pack("U*",1,20,300,4000),
1928                                         "U* produces Unicode" );
1929  
1930  Now we'll test that we got that space-at-the-beginning business right:
1931  
1932   is( "1.20.300.4000", sprintf "%vd", pack("  U*",1,20,300,4000),
1933                                         "  with spaces at the beginning" );
1934  
1935  And finally we'll test that we don't make Unicode strings if C<U> is B<not>
1936  the first active format:
1937  
1938   isnt( v1.20.300.4000, sprintf "%vd", pack("C0U*",1,20,300,4000),
1939                                         "U* not first isn't Unicode" );
1940  
1941  Mustn't forget to change the number of tests which appears at the top,
1942  or else the automated tester will get confused.  This will either look
1943  like this:
1944  
1945   print "1..156\n";
1946  
1947  or this:
1948  
1949   plan( tests => 156 );
1950  
1951  We now compile up Perl, and run it through the test suite. Our new
1952  tests pass, hooray!
1953  
1954  Finally, the documentation. The job is never done until the paperwork is
1955  over, so let's describe the change we've just made. The relevant place
1956  is F<pod/perlfunc.pod>; again, we make a copy, and then we'll insert
1957  this text in the description of C<pack>:
1958  
1959   =item *
1960  
1961   If the pattern begins with a C<U>, the resulting string will be treated
1962   as UTF-8-encoded Unicode. You can force UTF-8 encoding on in a string
1963   with an initial C<U0>, and the bytes that follow will be interpreted as
1964   Unicode characters. If you don't want this to happen, you can begin your
1965   pattern with C<C0> (or anything else) to force Perl not to UTF-8 encode your
1966   string, and then follow this with a C<U*> somewhere in your pattern.
1967  
1968  All done. Now let's create the patch. F<Porting/patching.pod> tells us
1969  that if we're making major changes, we should copy the entire directory
1970  to somewhere safe before we begin fiddling, and then do
1971  
1972      diff -ruN old new > patch
1973  
1974  However, we know which files we've changed, and we can simply do this:
1975  
1976      diff -u pp.c~             pp.c             >  patch
1977      diff -u t/op/pack.t~      t/op/pack.t      >> patch
1978      diff -u pod/perlfunc.pod~ pod/perlfunc.pod >> patch
1979  
1980  We end up with a patch looking a little like this:
1981  
1982      --- pp.c~       Fri Jun 02 04:34:10 2000
1983      +++ pp.c        Fri Jun 16 11:37:25 2000
1984      @@ -4375,6 +4375,7 @@
1985           register I32 items;
1986           STRLEN fromlen;
1987           register char *pat = SvPVx(*++MARK, fromlen);
1988      +    char *patcopy;
1989           register char *patend = pat + fromlen;
1990           register I32 len;
1991           I32 datumtype;
1992      @@ -4405,6 +4406,7 @@
1993      ...
1994  
1995  And finally, we submit it, with our rationale, to perl5-porters. Job
1996  done!
1997  
1998  =head2 Patching a core module
1999  
2000  This works just like patching anything else, with an extra
2001  consideration.  Many core modules also live on CPAN.  If this is so,
2002  patch the CPAN version instead of the core and send the patch off to
2003  the module maintainer (with a copy to p5p).  This will help the module
2004  maintainer keep the CPAN version in sync with the core version without
2005  constantly scanning p5p.
2006  
2007  The list of maintainers of core modules is usefully documented in
2008  F<Porting/Maintainers.pl>.
2009  
2010  =head2 Adding a new function to the core
2011  
2012  If, as part of a patch to fix a bug, or just because you have an
2013  especially good idea, you decide to add a new function to the core,
2014  discuss your ideas on p5p well before you start work.  It may be that
2015  someone else has already attempted to do what you are considering and
2016  can give lots of good advice or even provide you with bits of code
2017  that they already started (but never finished).
2018  
2019  You have to follow all of the advice given above for patching.  It is
2020  extremely important to test any addition thoroughly and add new tests
2021  to explore all boundary conditions that your new function is expected
2022  to handle.  If your new function is used only by one module (e.g. toke),
2023  then it should probably be named S_your_function (for static); on the
2024  other hand, if you expect it to be accessible from other functions in
2025  Perl, you should name it Perl_your_function.  See L<perlguts/Internal Functions>
2026  for more details.
2027  
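Schematically, and with completely made-up names, the two flavours look
like this (the prototypes and the short C<my_helper()>/C<my_function()>
forms that callers use are generated from your F<embed.pl> entry):

    /* static to its own source file, e.g. toke.c */
    STATIC void
    S_my_helper(pTHX_ const char *why)
    {
        PERL_UNUSED_ARG(why);       /* placeholder body */
    }

    /* visible throughout the core */
    void
    Perl_my_function(pTHX_ SV *sv)
    {
        PERL_UNUSED_ARG(sv);        /* placeholder body */
    }
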
2028  The location of any new code is also an important consideration.  Don't
2029  just create a new top level .c file and put your code there; you would
2030  have to make changes to Configure (so the Makefile is created properly),
2031  as well as possibly lots of include files.  This is strictly pumpking
2032  business.
2033  
2034  It is better to add your function to one of the existing top level
2035  source code files, but your choice is complicated by the nature of
2036  the Perl distribution.  Only the files that are marked as compiled
2037  static are located in the perl executable.  Everything else is located
2038  in the shared library (or DLL if you are running under WIN32).  So,
2039  for example, if a function was only used by functions located in
2040  toke.c, then your code can go in toke.c.  If, however, you want to call
2041  the function from universal.c, then you should put your code in another
2042  location, for example util.c.
2043  
2044  In addition to writing your C code, you will need to create an
2045  appropriate entry in embed.pl describing your function, then run
2046  'make regen_headers' to create the entries in the numerous header
2047  files that perl needs to compile correctly.  See L<perlguts/Internal Functions>
2048  for information on the various options that you can set in embed.pl.
2049  You will forget to do this a few (or many) times and you will get
2050  warnings during the compilation phase.  Make sure that you mention
2051  this when you post your patch to P5P; the pumpking needs to know this.
2052  
2053  When you write your new code, please be conscious of existing code
2054  conventions used in the perl source files.  See L<perlstyle> for
2055  details.  Although most of the guidelines discussed seem to focus on
2056  Perl code, rather than C, they all apply (except when they don't ;).
2057  See also I<Porting/patching.pod> file in the Perl source distribution
2058  for lots of details about both formatting and submitting patches of
2059  your changes.
2060  
2061  Lastly, TEST TEST TEST TEST TEST any code before posting to p5p.
2062  Test on as many platforms as you can find.  Test as many perl
2063  Configure options as you can (e.g. MULTIPLICITY).  If you have
2064  profiling or memory tools, see L<EXTERNAL TOOLS FOR DEBUGGING PERL>
2065  below for how to use them to further test your code.  Remember that
2066  most of the people on P5P are doing this on their own time and
2067  don't have the time to debug your code.
2068  
2069  =head2 Writing a test
2070  
2071  Every module and built-in function has an associated test file (or
2072  should...).  If you add or change functionality, you have to write a
2073  test.  If you fix a bug, you have to write a test so that bug never
2074  comes back.  If you alter the docs, it would be nice to test what the
2075  new documentation says.
2076  
2077  In short, if you submit a patch you probably also have to patch the
2078  tests.
2079  
2080  For modules, the test file is right next to the module itself.
2081  F<lib/strict.t> tests F<lib/strict.pm>.  This is a recent innovation,
2082  so there are some snags (and it would be wonderful for you to brush
2083  them out), but it basically works that way.  Everything else lives in
2084  F<t/>.
2085  
2086  =over 3
2087  
2088  =item F<t/base/>
2089  
2090  Testing of the absolute basic functionality of Perl.  Things like
2091  C<if>, basic file reads and writes, simple regexes, etc.  These are
2092  run first in the test suite and if any of them fail, something is
2093  I<really> broken.
2094  
2095  =item F<t/cmd/>
2096  
2097  These test the basic control structures, C<if/else>, C<while>,
2098  subroutines, etc.
2099  
2100  =item F<t/comp/>
2101  
2102  Tests basic issues of how Perl parses and compiles itself.
2103  
2104  =item F<t/io/>
2105  
2106  Tests for built-in IO functions, including command line arguments.
2107  
2108  =item F<t/lib/>
2109  
2110  The old home for the module tests, you shouldn't put anything new in
2111  here.  There are still some bits and pieces hanging around in here
2112  that need to be moved.  Perhaps you could move them?  Thanks!
2113  
2114  =item F<t/mro/>
2115  
2116  Tests for perl's method resolution order implementations
2117  (see L<mro>).
2118  
2119  =item F<t/op/>
2120  
2121  Tests for perl's built in functions that don't fit into any of the
2122  other directories.
2123  
2124  =item F<t/pod/>
2125  
2126  Tests for POD directives.  There are still some tests for the Pod
2127  modules hanging around in here that need to be moved out into F<lib/>.
2128  
2129  =item F<t/run/>
2130  
2131  Testing features of how perl actually runs, including exit codes and
2132  handling of PERL* environment variables.
2133  
2134  =item F<t/uni/>
2135  
2136  Tests for the core support of Unicode.
2137  
2138  =item F<t/win32/>
2139  
2140  Windows-specific tests.
2141  
2142  =item F<t/x2p>
2143  
2144  A test suite for the s2p converter.
2145  
2146  =back
2147  
2148  The core uses the same testing style as the rest of Perl, a simple
2149  "ok/not ok" run through Test::Harness, but there are a few special
2150  considerations.
2151  
2152  There are three ways to write a test in the core.  Test::More,
2153  t/test.pl and ad hoc C<print $test ? "ok 42\n" : "not ok 42\n">.  The
2154  decision of which to use depends on what part of the test suite you're
2155  working on.  This is a measure to prevent a high-level failure (such
2156  as Config.pm breaking) from causing basic functionality tests to fail.
2157  
2158  =over 4
2159  
2160  =item t/base t/comp
2161  
2162  Since we don't know if require works, or even subroutines, use ad hoc
2163  tests for these two.  Step carefully to avoid using the feature being
2164  tested.
2165  
2166  =item t/cmd t/run t/io t/op
2167  
2168  Now that basic require() and subroutines are tested, you can use the
2169  t/test.pl library which emulates the important features of Test::More
2170  while using a minimum of core features.
2171  
2172  You can also conditionally use certain libraries like Config, but be
2173  sure to skip the test gracefully if it's not there.
2174  
2175  =item t/lib ext lib
2176  
2177  Now that the core of Perl is tested, Test::More can be used.  You can
2178  also use the full suite of core modules in the tests.
2179  
2180  =back
2181  
2182  When you say "make test" Perl uses the F<t/TEST> program to run the
2183  test suite (except under Win32, where it uses F<t/harness> instead).
2184  All tests are run from the F<t/> directory, B<not> the directory
2185  which contains the test.  This causes some problems with the tests
2186  in F<lib/>, so here's some opportunity for some patching.
2187  
2188  You must be triply conscious of cross-platform concerns.  This usually
2189  boils down to using File::Spec and avoiding things like C<fork()> and
2190  C<system()> unless absolutely necessary.
2191  
2192  =head2 Special Make Test Targets
2193  
2194  There are various special make targets that can be used to test Perl
2195  slightly differently than the standard "test" target.  Not all of them
2196  are expected to give a 100% success rate.  Many of them have several
2197  aliases, and many of them are not available on certain operating
2198  systems.
2199  
2200  =over 4
2201  
2202  =item coretest
2203  
2204  Run F<perl> on all core tests (F<t/*> and F<lib/[a-z]*> pragma tests).
2205  
2206  (Not available on Win32)
2207  
2208  =item test.deparse
2209  
2210  Run all the tests through B::Deparse.  Not all tests will succeed.
2211  
2212  (Not available on Win32)
2213  
2214  =item test.taintwarn
2215  
2216  Run all tests with the B<-t> command-line switch.  Not all tests
2217  are expected to succeed (until they're specifically fixed, of course).
2218  
2219  (Not available on Win32)
2220  
2221  =item minitest
2222  
2223  Run F<miniperl> on F<t/base>, F<t/comp>, F<t/cmd>, F<t/run>, F<t/io>,
2224  F<t/op>, F<t/uni> and F<t/mro> tests.
2225  
2226  =item test.valgrind check.valgrind utest.valgrind ucheck.valgrind
2227  
2228  (Only in Linux) Run all the tests using the memory leak + naughty
2229  memory access tool "valgrind".  The log files will be named
2230  F<testname.valgrind>.
2231  
2232  =item test.third check.third utest.third ucheck.third
2233  
2234  (Only in Tru64)  Run all the tests using the memory leak + naughty
2235  memory access tool "Third Degree".  The log files will be named
2236  F<perl.3log.testname>.
2237  
2238  =item test.torture torturetest
2239  
2240  Run all the usual tests and some extra tests.  As of Perl 5.8.0 the
2241  only extra tests are Abigail's JAPHs, F<t/japh/abigail.t>.
2242  
2243  You can also run the torture test with F<t/harness> by giving
2244  C<-torture> argument to F<t/harness>.
2245  
2246  =item utest ucheck test.utf8 check.utf8
2247  
2248  Run all the tests with -Mutf8.  Not all tests will succeed.
2249  
2250  (Not available on Win32)
2251  
2252  =item minitest.utf16 test.utf16
2253  
2254  Runs the tests with UTF-16 encoded scripts, encoded with different
2255  versions of this encoding.
2256  
2257  C<make utest.utf16> runs the test suite with a combination of C<-utf8> and
2258  C<-utf16> arguments to F<t/TEST>.
2259  
2260  (Not available on Win32)
2261  
2262  =item test_harness
2263  
2264  Run the test suite with the F<t/harness> controlling program, instead of
2265  F<t/TEST>. F<t/harness> is more sophisticated, and uses the
2266  L<Test::Harness> module, thus using this test target supposes that perl
2267  mostly works. The main advantage for our purposes is that it prints a
2268  detailed summary of failed tests at the end. Also, unlike F<t/TEST>, it
2269  doesn't redirect stderr to stdout.
2270  
2271  Note that under Win32 F<t/harness> is always used instead of F<t/TEST>, so
2272  there is no special "test_harness" target.
2273  
2274  Under Win32's "test" target you may use the TEST_SWITCHES and TEST_FILES
2275  environment variables to control the behaviour of F<t/harness>.  This means
2276  you can say
2277  
2278      nmake test TEST_FILES="op/*.t"
2279      nmake test TEST_SWITCHES="-torture" TEST_FILES="op/*.t"
2280  
2281  =item test-notty test_notty
2282  
2283  Sets PERL_SKIP_TTY_TEST to true before running normal test.
2284  
2285  =back
2286  
2287  =head2 Running tests by hand
2288  
2289  You can run part of the test suite by hand by using one of the following
2290  commands from the F<t/> directory :
2291  
2292      ./perl -I../lib TEST list-of-.t-files
2293  
2294  or
2295  
2296      ./perl -I../lib harness list-of-.t-files
2297  
2298  (If you don't specify test scripts, the whole test suite will be run.)
2299  
2300  =head3 Using t/harness for testing
2301  
2302  If you use C<harness> for testing you have several command line options
2303  available to you. The arguments are as follows, and are in the order
2304  that they must appear if used together.
2305  
2306      harness -v -torture -re=pattern LIST OF FILES TO TEST
2307      harness -v -torture -re LIST OF PATTERNS TO MATCH
2308  
2309  If C<LIST OF FILES TO TEST> is omitted the file list is obtained from
2310  the manifest. The file list may include shell wildcards which will be
2311  expanded out.
2312  
2313  =over 4
2314  
2315  =item -v
2316  
2317  Run the tests under verbose mode so you can see what tests were run,
2318  and debug output.
2319  
2320  =item -torture
2321  
2322  Run the torture tests as well as the normal set.
2323  
2324  =item -re=PATTERN
2325  
2326  Filter the file list so that all the test files run match PATTERN.
2327  Note that this form is distinct from the B<-re LIST OF PATTERNS> form below
2328  in that it allows the file list to be provided as well.
2329  
2330  =item -re LIST OF PATTERNS
2331  
2332  Filter the file list so that all the test files run match
2333  /(LIST|OF|PATTERNS)/. Note that with this form the patterns
2334  are joined by '|' and you cannot supply a list of files; instead
2335  the test files are obtained from the MANIFEST.
2336  
2337  =back
2338  
2339  You can run an individual test by a command similar to
2340  
2341      ./perl -I../lib path/to/foo.t
2342  
2343  except that the harnesses set up some environment variables that may
2344  affect the execution of the test :
2345  
2346  =over 4
2347  
2348  =item PERL_CORE=1
2349  
2350  indicates that we're running this test as part of the perl core test suite.
2351  This is useful for modules that have a dual life on CPAN.
2352  
2353  =item PERL_DESTRUCT_LEVEL=2
2354  
2355  is set to 2 if it isn't set already (see L</PERL_DESTRUCT_LEVEL>)
2356  
2357  =item PERL
2358  
2359  (used only by F<t/TEST>) if set, overrides the path to the perl executable
2360  that should be used to run the tests (the default being F<./perl>).
2361  
2362  =item PERL_SKIP_TTY_TEST
2363  
2364  if set, tells the test suite to skip the tests that need a terminal.
2365  It's actually set automatically by the Makefile, but can also be forced
2366  artificially by running 'make test_notty'.
2367  
2368  =back
2369  
2370  =head3 Other environment variables that may influence tests
2371  
2372  =over 4
2373  
2374  =item PERL_TEST_Net_Ping
2375  
2376  Setting this variable runs all the Net::Ping module's tests,
2377  otherwise some tests that interact with the outside world are skipped.
2378  See L<perl58delta>.
2379  
2380  =item PERL_TEST_NOVREXX
2381  
2382  Setting this variable skips the vrexx.t tests for OS2::REXX.
2383  
2384  =item PERL_TEST_NUMCONVERTS
2385  
2386  This sets a variable in op/numconvert.t.
2387  
2388  =back
2389  
2390  See also the documentation for the Test and Test::Harness modules,
2391  for more environment variables that affect testing.
2392  
2393  =head2 Common problems when patching Perl source code
2394  
2395  Perl source plays by ANSI C89 rules: no C99 (or C++) extensions.  In
2396  some cases we have to take pre-ANSI requirements into consideration.
2397  You don't care about some particular platform having broken Perl?
2398  I hear there is still a strong demand for J2EE programmers.
2399  
2400  =head2 Perl environment problems
2401  
2402  =over 4
2403  
2404  =item *
2405  
2406  Not compiling with threading
2407  
2408  Compiling with threading (-Duseithreads) completely rewrites
2409  the function prototypes of Perl.  You had better try your changes
2410  with that.  Related to this is the difference between "Perl_-less"
2411  and "Perl_-ly" APIs, for example:
2412  
2413    Perl_sv_setiv(aTHX_ ...);
2414    sv_setiv(...);
2415  
2416  The first one explicitly passes in the context, which is needed for e.g.
2417  threaded builds.  The second one does that implicitly; do not get them
2418  mixed.  If you are not passing in an aTHX_, you will need to do a dTHX
2419  (or a dVAR) as the first thing in the function.
2420  
2421  See L<perlguts/"How multiple interpreters and concurrency are supported">
2422  for further discussion about context.
2423  
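For example, a function which is not handed the context explicitly has
to fetch it itself before touching any interpreter variables; a sketch
(the function and its message are made up):

    static void
    my_report(void)
    {
        dTHX;   /* fetch the current interpreter context: required on
                   threaded builds, essentially a no-op on others     */
        PerlIO_printf(Perl_debug_log, "%" IVdf " SVs alive\n",
                      (IV)PL_sv_count);
    }
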
2424  =item *
2425  
2426  Not compiling with -DDEBUGGING
2427  
2428  The DEBUGGING define exposes more code to the compiler,
2429  therefore more ways for things to go wrong.  You should try it.
2430  
2431  =item *
2432  
2433  Introducing (non-read-only) globals
2434  
2435  Do not introduce any modifiable globals, truly global or file static.
2436  They are bad form and complicate multithreading and other forms of
2437  concurrency.  The right way is to introduce them as new interpreter
2438  variables, see F<intrpvar.h> (at the very end for binary compatibility).
2439  
2440  Introducing read-only (const) globals is okay, as long as you verify
2441  with e.g. C<nm libperl.a|egrep -v ' [TURtr] '> (if your C<nm> has
2442  BSD-style output) that the data you added really is read-only.
2443  (If it is, it shouldn't show up in the output of that command.)
2444  
2445  If you want to have static strings, make them constant:
2446  
2447    static const char etc[] = "...";
2448  
2449  If you want to have arrays of constant strings, note carefully
2450  the right combination of C<const>s:
2451  
2452      static const char * const yippee[] =
2453      {"hi", "ho", "silver"};
2454  
2455  There is a way to completely hide any modifiable globals (they are all
2456  moved to heap), the compilation setting C<-DPERL_GLOBAL_STRUCT_PRIVATE>.
2457  It is not normally used, but can be used for testing, read more
2458  about it in L<perlguts/"Background and PERL_IMPLICIT_CONTEXT">.
2459  
2460  =item *
2461  
2462  Not exporting your new function
2463  
2464  Some platforms (Win32, AIX, VMS, OS/2, to name a few) require any
2465  function that is part of the public API (the shared Perl library)
2466  to be explicitly marked as exported.  See the discussion about
2467  F<embed.pl> in L<perlguts>.
2468  
2469  =item *
2470  
2471  Exporting your new function
2472  
2473  The new shiny result of either genuine new functionality or your
2474  arduous refactoring is now ready and correctly exported.  So what
2475  could possibly go wrong?
2476  
2477  Maybe simply that your function did not need to be exported in the
2478  first place.  Perl has a long and not so glorious history of exporting
2479  functions that it should not have.
2480  
2481  If the function is used only inside one source code file, make it
2482  static.  See the discussion about F<embed.pl> in L<perlguts>.
2483  
2484  If the function is used across several files, but intended only for
2485  Perl's internal use (and this should be the common case), do not
2486  export it to the public API.  See the discussion about F<embed.pl>
2487  in L<perlguts>.
2488  
2489  =back
2490  
2491  =head2 Portability problems
2492  
2493  The following are common causes of compilation and/or execution
2494  failures, not common to Perl as such.  The C FAQ is good bedtime
2495  reading.  Please test your changes with as many C compilers and
2496  platforms as possible -- we will, anyway, and it's nice to save
2497  oneself from public embarrassment.
2498  
2499  If using gcc, you can add the C<-std=c89> option which will hopefully
2500  catch most of these unportabilities. (However it might also catch
2501  incompatibilities in your system's header files.)
2502  
2503  Use the Configure C<-Dgccansipedantic> flag to enable the gcc
2504  C<-ansi -pedantic> flags which enforce stricter ANSI rules.
2505  
2506  If using C<gcc -Wall>, note that not all the possible warnings
2507  (like C<-Wuninitialized>) are given unless you also compile with C<-O>.
2508  
2509  Note that if using gcc, starting from Perl 5.9.5 the Perl core source
2510  code files (the ones at the top level of the source code distribution,
2511  but not e.g. the extensions under ext/) are automatically compiled
2512  with as many as possible of the C<-std=c89>, C<-ansi>, C<-pedantic>,
2513  and a selection of C<-W> flags (see cflags.SH).
2514  
2515  Also study L<perlport> carefully to avoid any bad assumptions
2516  about the operating system, filesystems, and so forth.
2517  
2518  You may once in a while try a "make microperl" to see whether we
2519  can still compile Perl with just the bare minimum of interfaces.
2520  (See README.micro.)
2521  
2522  Do not assume an operating system indicates a certain compiler.
2523  
2524  =over 4
2525  
2526  =item *
2527  
2528  Casting pointers to integers or casting integers to pointers
2529  
2530      void castaway(U8* p)
2531      {
2532        IV i = p;
2533  
2534  or
2535  
2536      void castaway(U8* p)
2537      {
2538        IV i = (IV)p;
2539  
2540  Both are bad, and broken, and unportable.  Use the PTR2IV()
2541  macro that does it right.  (Likewise, there are PTR2UV(), PTR2NV(),
2542  INT2PTR(), and NUM2PTR().)
2543  
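For the record, the fixed version of the function above would look
something like this sketch:

    void castaway(U8 *p)
    {
        IV  i = PTR2IV(p);          /* pointer to integer, portably  */
        U8 *q = INT2PTR(U8 *, i);   /* and back to a pointer again   */
        assert(q == p);             /* the round trip loses nothing  */
    }
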
2544  =item *
2545  
2546  Casting between function pointers and data pointers
2547  
2548  Technically speaking casting between function pointers and data
2549  pointers is unportable and undefined, but practically speaking
2550  it seems to work; you should still use the FPTR2DPTR() and DPTR2FPTR()
2551  macros.  Sometimes you can also play games with unions.
2552  
2553  =item *
2554  
2555  Assuming sizeof(int) == sizeof(long)
2556  
2557  There are platforms where longs are 64 bits, and platforms where ints
2558  are 64 bits, and while we are out to shock you, even platforms where
2559  shorts are 64 bits.  This is all legal according to the C standard.
2560  (In other words, "long long" is not a portable way to specify 64 bits,
2561  and "long long" is not even guaranteed to be any wider than "long".)
2562  
2563  Instead, use the definitions IV, UV, IVSIZE, I32SIZE, and so forth.
2564  Avoid things like I32 because they are B<not> guaranteed to be
2565  I<exactly> 32 bits, they are I<at least> 32 bits, nor are they
2566  guaranteed to be B<int> or B<long>.  If you really explicitly need
2567  64-bit variables, use I64 and U64, but only if guarded by HAS_QUAD.
2568  
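As a small sketch, prefer the Perl typedefs and their size macros over
guessing widths:

    IV total = 0;       /* signed, at least 32 bits wide      */
    UV mask  = ~(UV)0;  /* unsigned counterpart, same width   */
    #if IVSIZE >= 8
        /* this build is guaranteed to have 64-bit IVs */
    #endif
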
2569  =item *
2570  
2571  Assuming one can dereference any type of pointer for any type of data
2572  
2573    char *p = ...;
2574    long pony = *p;    /* BAD */
2575  
2576  Many platforms, quite rightly so, will give you a core dump instead
2577  of a pony if the p happens not to be correctly aligned.
2578  
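If you really need to read a C<long> from an arbitrary byte position,
copy the bytes into a properly aligned variable first; a sketch using
Perl's C<Copy()> macro (a type-aware wrapper around C<memcpy()>):

    char *p = ...;
    long pony;
    Copy(p, &pony, 1, long);    /* copy one long's worth of bytes safely */
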
2579  =item *
2580  
2581  Lvalue casts
2582  
2583    (int)*p = ...;    /* BAD */
2584  
2585  Simply not portable.  Get your lvalue to be of the right type,
2586  or maybe use temporary variables, or dirty tricks with unions.
2587  
2588  =item *
2589  
2590  Assume B<anything> about structs (especially the ones you
2591  don't control, like the ones coming from the system headers)
2592  
2593  =over 8
2594  
2595  =item *
2596  
2597  That a certain field exists in a struct
2598  
2599  =item *
2600  
2601  That no other fields exist besides the ones you know of
2602  
2603  =item *
2604  
2605  That a field is of certain signedness, sizeof, or type
2606  
2607  =item *
2608  
2609  That the fields are in a certain order
2610  
2611  =over 8
2612  
2613  =item *
2614  
2615  While C guarantees the ordering specified in the struct definition,
2616  between different platforms the definitions might differ
2617  
2618  =back
2619  
2620  =item *
2621  
2622  That the sizeof(struct) or the alignments are the same everywhere
2623  
2624  =over 8
2625  
2626  =item *
2627  
2628  There might be padding bytes between the fields to align the fields -
2629  the bytes can be anything
2630  
2631  =item *
2632  
2633  Structs are required to be aligned to the maximum alignment required
2634  by the fields - which for native types is usually equivalent to
2635  sizeof() of the field
2636  
2637  =back
2638  
2639  =back
2640  
2641  =item *
2642  
2643  Mixing #define and #ifdef
2644  
2645    #define BURGLE(x) ... \
2646    #ifdef BURGLE_OLD_STYLE        /* BAD */
2647    ... do it the old way ... \
2648    #else
2649    ... do it the new way ... \
2650    #endif
2651  
2652  You cannot portably "stack" cpp directives.  For example in the above
2653  you need two separate BURGLE() #defines, one for each #ifdef branch.
2654  
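A hedged sketch of the portable arrangement (old_burgle() and
new_burgle() are made-up names):

    #ifdef BURGLE_OLD_STYLE
    #  define BURGLE(x) old_burgle(x)
    #else
    #  define BURGLE(x) new_burgle(x)
    #endif
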
2655  =item *
2656  
2657  Adding stuff after #endif or #else
2658  
2659    #ifdef SNOSH
2660    ...
2661    #else !SNOSH    /* BAD */
2662    ...
2663    #endif SNOSH    /* BAD */
2664  
2665  The #endif and #else cannot portably have anything non-comment after
2666  them.  If you want to document what is going on (which is a good idea
2667  especially if the branches are long), use (C) comments:
2668  
2669    #ifdef SNOSH
2670    ...
2671    #else /* !SNOSH */
2672    ...
2673    #endif /* SNOSH */
2674  
2675  The gcc option C<-Wendif-labels> warns about the bad variant
2676  (on by default starting from Perl 5.9.4).
2677  
2678  =item *
2679  
2680  Having a comma after the last element of an enum list
2681  
2682    enum color {
2683      CERULEAN,
2684      CHARTREUSE,
2685      CINNABAR,     /* BAD */
2686    };
2687  
2688  is not portable.  Leave out the last comma.
2689  
2690  Also note that whether enums are implicitly morphable to ints
2691  varies between compilers; you might need an explicit (int) cast.
2692  
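The portable spelling, with an explicit cast shown for good measure:

    enum color {
      CERULEAN,
      CHARTREUSE,
      CINNABAR        /* no trailing comma */
    };

    int i = (int)CINNABAR;    /* explicit cast if you need a plain int */
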
2693  =item *
2694  
2695  Using //-comments
2696  
2697    // This function bamfoodles the zorklator.    /* BAD */
2698  
2699  That is C99 or C++.  Perl is C89.  Using the //-comments is silently
2700  allowed by many C compilers but cranking up the ANSI C89 strictness
2701  (which we like to do) causes the compilation to fail.
2702  
2703  =item *
2704  
2705  Mixing declarations and code
2706  
2707    void zorklator()
2708    {
2709      int n = 3;
2710      set_zorkmids(n);    /* BAD */
2711      int q = 4;
2712  
2713  That is C99 or C++.  Some C compilers allow that, but you shouldn't.
2714  
2715  The gcc option C<-Wdeclaration-after-statement> scans for such problems
2716  (on by default starting from Perl 5.9.4).
2717  
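A minimal sketch of the C89-friendly layout:

    void zorklator()
    {
      int n = 3;
      int q = 4;          /* declarations first ... */

      set_zorkmids(n);    /* ... statements afterwards */
      set_zorkmids(q);
    }
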
2718  =item *
2719  
2720  Introducing variables inside for()
2721  
2722    for(int i = ...; ...; ...) {    /* BAD */
2723  
2724  That is C99 or C++.  While it would indeed be awfully nice to have that
2725  also in C89, to limit the scope of the loop variable, alas, we cannot.
2726  
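Declare the loop variable beforehand instead; a minimal sketch:

    int i;

    for (i = 0; i < n; i++) {    /* i declared beforehand, C89-style */
        ...
    }
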
2727  =item *
2728  
2729  Mixing signed char pointers with unsigned char pointers
2730  
2731    int foo(char *s) { ... }
2732    ...
2733    unsigned char *t = ...; /* Or U8* t = ... */
2734    foo(t);   /* BAD */
2735  
2736  While this is legal practice, it is certainly dubious, and downright
2737  fatal on at least one platform: VMS cc, for example, considers this a
2738  fatal error.  One reason people often make this mistake is that a
2739  "naked char" (and therefore dereferencing a "naked char pointer") has
2740  undefined signedness: whether the result is signed or unsigned depends
2741  on the compiler, the compiler flags, and the underlying platform.
2742  For this very same reason using a 'char' as an array
2743  index is bad.
2744  
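If you must mix them, make the conversion explicit; reusing the example
above:

    int foo(char *s) { ... }
    ...
    U8 *t = ...;
    foo((char *)t);    /* explicit cast: no signedness surprises */
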
2745  =item *
2746  
2747  Macros that have string constants and their arguments as substrings of
2748  the string constants
2749  
2750    #define FOO(n) printf("number = %d\n", n)    /* BAD */
2751    FOO(10);
2752  
2753  Pre-ANSI semantics for that was equivalent to
2754  
2755    printf("10umber = %d\10");
2756  
2757  which is probably not what you were expecting.  Unfortunately at least
2758  one reasonably common and modern C compiler does "real backward
2759  compatibility" here: on AIX that is what still happens even though the
2760  rest of the AIX compiler is very happily C89.
2761  
2762  =item *
2763  
2764  Using printf formats for non-basic C types
2765  
2766     IV i = ...;
2767     printf("i = %d\n", i);    /* BAD */
2768  
2769  While this might by accident work on some platforms (where IV happens
2770  to be an C<int>), in general it cannot.  IV might be something larger.
2771  The situation is even worse with more specific types (defined by Perl's
2772  configuration step in F<config.h>):
2773  
2774     Uid_t who = ...;
2775     printf("who = %d\n", who);    /* BAD */
2776  
2777  The problem here is that Uid_t might be not only not C<int>-wide
2778  but it might also be unsigned, in which case large uids would be
2779  printed as negative values.
2780  
2781  There is no simple solution to this because of printf()'s limited
2782  intelligence, but for many types the right format is available as a
2783  macro with either an 'f' or '_f' suffix, for example:
2784  
2785     IVdf /* IV in decimal */
2786     UVxf /* UV in hexadecimal */
2787  
2788     printf("i = %"IVdf"\n", i); /* The IVdf is a string constant. */
2789  
2790     Uid_t_f /* Uid_t in decimal */
2791  
2792     printf("who = %"Uid_t_f"\n", who);
2793  
2794  Or you can try casting to a "wide enough" type:
2795  
2796     printf("i = %"IVdf"\n", (IV)something_very_small_and_signed);
2797  
2798  Also remember that the C<%p> format really does require a void pointer:
2799  
2800     U8* p = ...;
2801     printf("p = %p\n", (void*)p);
2802  
2803  The gcc option C<-Wformat> scans for such problems.
2804  
2805  =item *
2806  
2807  Blindly using variadic macros
2808  
2809  gcc has had them for a while with its own syntax, and C99 brought
2810  them with a standardized syntax.  Don't use the former, and use
2811  the latter only if HAS_C99_VARIADIC_MACROS is defined.
2812  
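A hedged sketch of the guarded C99 style (the DEBUG_NOTE() macro is made
up):

    #ifdef HAS_C99_VARIADIC_MACROS
    #  define DEBUG_NOTE(...) PerlIO_printf(Perl_debug_log, __VA_ARGS__)
    #else
       /* fall back to a fixed-arity macro or a plain function */
    #endif
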
2813  =item *
2814  
2815  Blindly passing va_list
2816  
2817  Not all platforms support passing va_list to further varargs (stdarg)
2818  functions.  The right thing to do is to copy the va_list using
2819  Perl_va_copy() if NEED_VA_COPY is defined.
2820  
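A hedged sketch of the pattern (vdo_something() is a made-up varargs
helper, and C<args> is the incoming va_list):

    #ifdef NEED_VA_COPY
        va_list args_copy;
        Perl_va_copy(args, args_copy);
        vdo_something(fmt, &args_copy);
    #else
        vdo_something(fmt, &args);
    #endif
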
2821  =item *
2822  
2823  Using gcc statement expressions
2824  
2825     val = ({...;...;...});    /* BAD */
2826  
2827  While a nice extension, it's not portable.  The Perl code does
2828  admittedly use them if available to gain some extra speed
2829  (essentially as a funky form of inlining), but you shouldn't.
2830  
2831  =item *
2832  
2833  Binding together several statements
2834  
2835  Use the macros STMT_START and STMT_END.
2836  
2837     STMT_START {
2838        ...
2839     } STMT_END
2840  
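Wrapped in a macro, a STMT_START/STMT_END body behaves like a single
statement, so it is safe even after a braceless C<if>; a hedged sketch
(do_this() and do_that() are made up):

    #define DO_BOTH(a,b)  STMT_START { \
        do_this(a);                    \
        do_that(b);                    \
    } STMT_END

    if (some_condition)
        DO_BOTH(1, 2);    /* expands safely, even without braces */
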
2841  =item *
2842  
2843  Testing for operating systems or versions when you should be testing for features
2844  
2845    #ifdef __FOONIX__    /* BAD */
2846    foo = quux();
2847    #endif
2848  
2849  Unless you know with 100% certainty that quux() is only ever available
2850  for the "Foonix" operating system B<and> that it is available B<and>
2851  correctly working for B<all> past, present, B<and> future versions of
2852  "Foonix", the above is very wrong.  This is more correct (though still
2853  not perfect, because the below is a compile-time check):
2854  
2855    #ifdef HAS_QUUX
2856    foo = quux();
2857    #endif
2858  
2859  How does the HAS_QUUX become defined where it needs to be?  Well, if
2860  Foonix happens to be UNIXy enough to be able to run the Configure
2861  script, and Configure has been taught about detecting and testing
2862  quux(), the HAS_QUUX will be correctly defined.  On other platforms,
2863  the corresponding configuration step will hopefully do the same.
2864  
2865  In a pinch, if you cannot wait for Configure to be educated,
2866  or if you have a good hunch of where quux() might be available,
2867  you can temporarily try the following:
2868  
2869    #if (defined(__FOONIX__) || defined(__BARNIX__))
2870    # define HAS_QUUX
2871    #endif
2872  
2873    ...
2874  
2875    #ifdef HAS_QUUX
2876    foo = quux();
2877    #endif
2878  
2879  But in any case, try to keep the features and operating systems separate.
2880  
2881  =back
2882  
2883  =head2 Problematic System Interfaces
2884  
2885  =over 4
2886  
2887  =item *
2888  
2889  malloc(0), realloc(0), calloc(0, 0) are non-portable.  To be portable
2890  allocate at least one byte.  (In general you should rarely need to
2891  work at this low level, but instead use the various malloc wrappers.)
2892  
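For example, a minimal sketch using the Newx() wrapper (C<count> stands
for whatever size you computed):

    char *buf;

    Newx(buf, count ? count : 1, char);   /* never ask for zero bytes */
    ...
    Safefree(buf);
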
2893  =item *
2894  
2895  snprintf() - the return value is unportable.  Use my_snprintf() instead.
2896  
2897  =back
2898  
2899  =head2 Security problems
2900  
2901  Last but not least, here are various tips for safer coding.
2902  
2903  =over 4
2904  
2905  =item *
2906  
2907  Do not use gets()
2908  
2909  Or we will publicly ridicule you.  Seriously.
2910  
2911  =item *
2912  
2913  Do not use strcpy() or strcat() or strncpy() or strncat()
2914  
2915  Use my_strlcpy() and my_strlcat() instead: they either use the native
2916  implementation, or Perl's own implementation (borrowed from the public
2917  domain implementation of INN).
2918  
2919  =item *
2920  
2921  Do not use sprintf() or vsprintf()
2922  
2923  If you really want just plain byte strings, use my_snprintf()
2924  and my_vsnprintf() instead, which will try to use snprintf() and
2925  vsnprintf() if those safer APIs are available.  If you want something
2926  fancier than a plain byte string, use SVs and Perl_sv_catpvf().
2927  
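A hedged sketch of the preferred calls (the buffer and the inputs are
made up):

    char buf[256];

    my_strlcpy(buf, some_name, sizeof(buf));     /* always NUL-terminates */
    my_strlcat(buf, some_suffix, sizeof(buf));
    my_snprintf(buf, sizeof(buf), "%s (%d)", some_name, some_count);
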
2928  =back
2929  
2930  =head1 EXTERNAL TOOLS FOR DEBUGGING PERL
2931  
2932  Sometimes it helps to use external tools while debugging and
2933  testing Perl.  This section tries to guide you through using
2934  some common testing and debugging tools with Perl.  This is
2935  meant as a guide to interfacing these tools with Perl, not
2936  as any kind of guide to the use of the tools themselves.
2937  
2938  B<NOTE 1>: Running under memory debuggers such as Purify, valgrind, or
2939  Third Degree greatly slows down the execution: seconds become minutes,
2940  minutes become hours.  For example, as of Perl 5.8.1,
2941  ext/Encode/t/Unicode.t takes extraordinarily long to complete under
2942  e.g. Purify, Third Degree, and valgrind.  Under valgrind it takes more
2943  than six hours, even on a snappy computer-- the said test must be
2944  doing something quite unfriendly to memory debuggers.  If you
2945  don't feel like waiting, you can simply kill the perl
2946  process.
2947  
2948  B<NOTE 2>: To minimize the number of memory leak false alarms (see
2949  L</PERL_DESTRUCT_LEVEL> for more information), you have to set the
2950  environment variable PERL_DESTRUCT_LEVEL to 2.  The F<TEST>
2951  and harness scripts do that automatically.  But if you are running
2952  some of the tests manually-- for csh-like shells:
2953  
2954      setenv PERL_DESTRUCT_LEVEL 2
2955  
2956  and for Bourne-type shells:
2957  
2958      PERL_DESTRUCT_LEVEL=2
2959      export PERL_DESTRUCT_LEVEL
2960  
2961  or in UNIXy environments you can also use the C<env> command:
2962  
2963      env PERL_DESTRUCT_LEVEL=2 valgrind ./perl -Ilib ...
2964  
2965  B<NOTE 3>: There are known memory leaks when there are compile-time
2966  errors within eval or require; seeing C<S_doeval> in the call stack
2967  is a good sign of these.  Fixing these leaks is non-trivial,
2968  unfortunately, but they must be fixed eventually.
2969  
2970  B<NOTE 4>: L<DynaLoader> will not clean up after itself completely
2971  unless Perl is built with the Configure option
2972  C<-Accflags=-DDL_UNLOAD_ALL_AT_EXIT>.
2973  
2974  =head2 Rational Software's Purify
2975  
2976  Purify is a commercial tool that is helpful in identifying
2977  memory overruns, wild pointers, memory leaks and other such
2978  badness.  Perl must be compiled in a specific way for
2979  optimal testing with Purify.  Purify is available under
2980  Windows NT, Solaris, HP-UX, SGI, and Siemens Unix.
2981  
2982  =head2 Purify on Unix
2983  
2984  On Unix, Purify creates a new Perl binary.  To get the most
2985  benefit out of Purify, you should create the perl to Purify
2986  using:
2987  
2988      sh Configure -Accflags=-DPURIFY -Doptimize='-g' \
2989       -Uusemymalloc -Dusemultiplicity
2990  
2991  where these arguments mean:
2992  
2993  =over 4
2994  
2995  =item -Accflags=-DPURIFY
2996  
2997  Disables Perl's arena memory allocation functions, as well as
2998  forcing use of memory allocation functions derived from the
2999  system malloc.
3000  
3001  =item -Doptimize='-g'
3002  
3003  Adds debugging information so that you see the exact source
3004  statements where the problem occurs.  Without this flag, all
3005  you will see is the source filename of where the error occurred.
3006  
3007  =item -Uusemymalloc
3008  
3009  Disable Perl's malloc so that Purify can more closely monitor
3010  allocations and leaks.  Using Perl's malloc will make Purify
3011  report most leaks in the "potential" leaks category.
3012  
3013  =item -Dusemultiplicity
3014  
3015  Enabling the multiplicity option allows perl to clean up
3016  thoroughly when the interpreter shuts down, which reduces the
3017  number of bogus leak reports from Purify.
3018  
3019  =back
3020  
3021  Once you've compiled a perl suitable for Purify'ing, then you
3022  can just:
3023  
3024      make pureperl
3025  
3026  which creates a binary named 'pureperl' that has been Purify'ed.
3027  This binary is used in place of the standard 'perl' binary
3028  when you want to debug Perl memory problems.
3029  
3030  As an example, to show any memory leaks produced during the
3031  standard Perl testset you would create and run the Purify'ed
3032  perl as:
3033  
3034      make pureperl
3035      cd t
3036      ../pureperl -I../lib harness
3037  
3038  which would run the test suite and report any memory problems.
3039  
3040  Purify outputs messages in "Viewer" windows by default.  If
3041  you don't have a windowing environment or if you simply
3042  want the Purify output to unobtrusively go to a log file
3043  instead of to the interactive window, use the following
3044  options to output to the log file "perl.log":
3045  
3046      setenv PURIFYOPTIONS "-chain-length=25 -windows=no \
3047       -log-file=perl.log -append-logfile=yes"
3048  
3049  If you plan to use the "Viewer" windows, then you only need this option:
3050  
3051      setenv PURIFYOPTIONS "-chain-length=25"
3052  
3053  In Bourne-type shells:
3054  
3055      PURIFYOPTIONS="..."
3056      export PURIFYOPTIONS
3057  
3058  or if you have the "env" utility:
3059  
3060      env PURIFYOPTIONS="..." ../pureperl ...
3061  
3062  =head2 Purify on NT
3063  
3064  Purify on Windows NT instruments the Perl binary 'perl.exe'
3065  on the fly.  There are several options in the makefile you
3066  should change to get the most use out of Purify:
3067  
3068  =over 4
3069  
3070  =item DEFINES
3071  
3072  You should add -DPURIFY to the DEFINES line so the DEFINES
3073  line looks something like:
3074  
3075      DEFINES = -DWIN32 -D_CONSOLE -DNO_STRICT $(CRYPT_FLAG) -DPURIFY=1
3076  
3077  to disable Perl's arena memory allocation functions, as
3078  well as to force use of memory allocation functions derived
3079  from the system malloc.
3080  
3081  =item USE_MULTI = define
3082  
3083  Enabling the multiplicity option allows perl to clean up
3084  thoroughly when the interpreter shuts down, which reduces the
3085  number of bogus leak reports from Purify.
3086  
3087  =item #PERL_MALLOC = define
3088  
3089  Disable Perl's malloc so that Purify can more closely monitor
3090  allocations and leaks.  Using Perl's malloc will make Purify
3091  report most leaks in the "potential" leaks category.
3092  
3093  =item CFG = Debug
3094  
3095  Adds debugging information so that you see the exact source
3096  statements where the problem occurs.  Without this flag, all
3097  you will see is the source filename of where the error occurred.
3098  
3099  =back
3100  
3101  As an example, to show any memory leaks produced during the
3102  standard Perl testset you would create and run Purify as:
3103  
3104      cd win32
3105      make
3106      cd ../t
3107      purify ../perl -I../lib harness
3108  
3109  which would instrument Perl in memory, run the test suite,
3110  and finally report any memory problems.
3111  
3112  =head2 valgrind
3113  
3114  The excellent valgrind tool can be used to find out both memory leaks
3115  and illegal memory accesses.  As of August 2003 it unfortunately works
3116  only on x86 (ELF) Linux.  The special "test.valgrind" target can be used
3117  to run the tests under valgrind.  Found errors and memory leaks are
3118  logged in files named F<testfile.valgrind>.
3119  
3120  Valgrind also provides a cachegrind tool, invoked on perl as:
3121  
3122      VG_OPTS=--tool=cachegrind make test.valgrind
3123  
3124  As system libraries (most notably glibc) also trigger errors,
3125  valgrind allows you to suppress such errors using suppression files. The
3126  default suppression file that comes with valgrind already catches a lot
3127  of them. Some additional suppressions are defined in F<t/perl.supp>.
3128  
3129  To get valgrind and for more information see
3130  
3131      http://developer.kde.org/~sewardj/
3132  
3133  =head2 Compaq's/Digital's/HP's Third Degree
3134  
3135  Third Degree is a tool for memory leak detection and memory access checks.
3136  It is one of the many tools in the ATOM toolkit.  The toolkit is only
3137  available on Tru64 (formerly known as Digital UNIX formerly known as
3138  DEC OSF/1).
3139  
3140  When building Perl, you must first run Configure with -Doptimize=-g
3141  and -Uusemymalloc flags; after that you can use the make targets
3142  "perl.third" and "test.third".  (What is required is that Perl must be
3143  compiled using the C<-g> flag; you may need to re-Configure.)
3144  
3145  The short story is that with "atom" you can instrument the Perl
3146  executable to create a new executable called F<perl.third>.  When the
3147  instrumented executable is run, it creates a log of dubious memory
3148  traffic in a file called F<perl.3log>.  See the manual pages of atom and
3149  third for more information.  The most extensive Third Degree
3150  documentation is available in the Compaq "Tru64 UNIX Programmer's
3151  Guide", chapter "Debugging Programs with Third Degree".
3152  
3153  The "test.third" leaves a lot of files named F<foo_bar.3log> in the t/
3154  subdirectory.  There is a problem with these files: Third Degree is so
3155  effective that it also finds problems in the system libraries.
3156  Therefore you should use the Porting/thirdclean script to clean up
3157  the F<*.3log> files.
3158  
3159  There are also leaks that, for a certain definition of a leak,
3160  aren't.  See L</PERL_DESTRUCT_LEVEL> for more information.
3161  
3162  =head2 PERL_DESTRUCT_LEVEL
3163  
3164  If you want to run any of the tests yourself manually using e.g.
3165  valgrind, or the pureperl or perl.third executables, please note that
3166  by default perl B<does not> explicitly clean up all the memory it has
3167  allocated (such as global memory arenas) but instead lets the exit()
3168  of the whole program "take care" of such allocations, also known as
3169  "global destruction of objects".
3170  
3171  There is a way to tell perl to do complete cleanup: set the
3172  environment variable PERL_DESTRUCT_LEVEL to a non-zero value.
3173  The t/TEST wrapper does set this to 2, and this is what you
3174  need to do too, if you don't want to see the "global leaks":
3175  For example, for "third-degreed" Perl:
3176  
3177      env PERL_DESTRUCT_LEVEL=2 ./perl.third -Ilib t/foo/bar.t
3178  
3179  (Note: the mod_perl apache module also uses this environment variable
3180  for its own purposes and extends its semantics. Refer to the mod_perl
3181  documentation for more information. Also, spawned threads do the
3182  equivalent of setting this variable to the value 1.)
3183  
3184  If, at the end of a run you get the message I<N scalars leaked>, you can
3185  recompile with C<-DDEBUG_LEAKING_SCALARS>, which will cause the addresses
3186  of all those leaked SVs to be dumped along with details as to where each
3187  SV was originally allocated. This information is also displayed by
3188  Devel::Peek. Note that the extra details recorded with each SV increase
3189  memory usage, so it shouldn't be used in production environments. It also
3190  converts C<new_SV()> from a macro into a real function, so you can use
3191  your favourite debugger to discover where those pesky SVs were allocated.
3192  
3193  =head2 PERL_MEM_LOG
3194  
3195  If compiled with C<-DPERL_MEM_LOG>, all Newx() and Renew() allocations
3196  and Safefree() in the Perl core go through logging functions, which is
3197  handy for breakpoint setting.  If also compiled with C<-DPERL_MEM_LOG_STDERR>,
3198  the allocations and frees are logged to STDERR (or more precisely, to the
3199  file descriptor 2) in these logging functions, with the calling source code
3200  file and line number (and C function name, if supported by the C compiler).
3201  
3202  This logging is somewhat similar to C<-Dm> but independent of C<-DDEBUGGING>,
3203  and at a higher level (the C<-Dm> is directly at the point of C<malloc()>,
3204  while the C<PERL_MEM_LOG> is at the level of C<New()>).
3205  
3206  =head2 Profiling
3207  
3208  Depending on your platform there are various ways of profiling Perl.
3209  
3210  There are two commonly used techniques of profiling executables:
3211  I<statistical time-sampling> and I<basic-block counting>.
3212  
3213  The first method periodically takes samples of the CPU program
3214  counter, and since the program counter can be correlated with the code
3215  generated for functions, we get a statistical view of which
3216  functions the program is spending its time in.  The caveats are that very
3217  small/fast functions have a lower probability of showing up in the
3218  profile, and that periodically interrupting the program (this is
3219  usually done rather frequently, on the scale of milliseconds) imposes
3220  an additional overhead that may skew the results.  The first problem
3221  can be alleviated by running the code for longer (in general this is a
3222  good idea for profiling); the second problem is usually kept in check
3223  by the profiling tools themselves.
3224  
3225  The second method divides up the generated code into I<basic blocks>.
3226  Basic blocks are sections of code that are entered only in the
3227  beginning and exited only at the end.  For example, a conditional jump
3228  ends a basic block and its targets start new ones.  Basic block profiling usually works by
3229  I<instrumenting> the code by adding I<enter basic block #nnnn>
3230  book-keeping code to the generated code.  During the execution of the
3231  code the basic block counters are then updated appropriately.  The
3232  caveat is that the added extra code can skew the results: again, the
3233  profiling tools usually try to factor their own effects out of the
3234  results.
3235  
3236  =head2 Gprof Profiling
3237  
3238  gprof is a profiling tool available on many UNIX platforms;
3239  it uses I<statistical time-sampling>.
3240  
3241  You can build a profiled version of perl called "perl.gprof" by
3242  invoking the make target "perl.gprof".  (What is required is that Perl
3243  must be compiled using the C<-pg> flag; you may need to re-Configure.)
3244  Running the profiled version of Perl will create an output file called
3245  F<gmon.out>, which contains the profiling data collected
3246  during the execution.
3247  
3248  The gprof tool can then display the collected data in various ways.
3249  Usually gprof understands the following options:
3250  
3251  =over 4
3252  
3253  =item -a
3254  
3255  Suppress statically defined functions from the profile.
3256  
3257  =item -b
3258  
3259  Suppress the verbose descriptions in the profile.
3260  
3261  =item -e routine
3262  
3263  Exclude the given routine and its descendants from the profile.
3264  
3265  =item -f routine
3266  
3267  Display only the given routine and its descendants in the profile.
3268  
3269  =item -s
3270  
3271  Generate a summary file called F<gmon.sum> which then may be given
3272  to subsequent gprof runs to accumulate data over several runs.
3273  
3274  =item -z
3275  
3276  Display routines that have zero usage.
3277  
3278  =back
3279  
3280  For more detailed explanation of the available commands and output
3281  formats, see your own local documentation of gprof.
3282  
3283  quick hint:
3284  
3285      $ sh Configure -des -Dusedevel -Doptimize='-g' -Accflags='-pg' -Aldflags='-pg' && make
3286      $ ./perl someprog # creates gmon.out in current directory
3287      $ gprof perl > out
3288      $ view out
3289  
3290  =head2 GCC gcov Profiling
3291  
3292  Starting from GCC 3.0 I<basic block profiling> is officially available
3293  for the GNU CC.
3294  
3295  You can build a profiled version of perl called F<perl.gcov> by
3296  invoking the make target "perl.gcov" (what is required is that Perl must
3297  be compiled using gcc with the flags C<-fprofile-arcs
3298  -ftest-coverage>; you may need to re-Configure).
3299  
3300  Running the profiled version of Perl will cause profile output to be
3301  generated.  For each source file an accompanying ".da" file will be
3302  created.
3303  
3304  To display the results you use the "gcov" utility (which should
3305  be installed if you have gcc 3.0 or newer installed).  F<gcov> is
3306  run on source code files, like this
3307  
3308      gcov sv.c
3309  
3310  which will cause F<sv.c.gcov> to be created.  The F<.gcov> files
3311  contain the source code annotated with relative frequencies of
3312  execution indicated by "#" markers.
3313  
3314  Useful options of F<gcov> include C<-b> which will summarise the
3315  basic block, branch, and function call coverage, and C<-c> which
3316  instead of relative frequencies will use the actual counts.  For
3317  more information on the use of F<gcov> and basic block profiling
3318  with gcc, see the latest GNU CC manual, as of GCC 3.0 see
3319  
3320      http://gcc.gnu.org/onlinedocs/gcc-3.0/gcc.html
3321  
3322  and its section titled "8. gcov: a Test Coverage Program"
3323  
3324      http://gcc.gnu.org/onlinedocs/gcc-3.0/gcc_8.html#SEC132
3325  
3326  quick hint:
3327  
3328      $ sh Configure -des  -Doptimize='-g' -Accflags='-fprofile-arcs -ftest-coverage' \
3329          -Aldflags='-fprofile-arcs -ftest-coverage' && make perl.gcov
3330      $ rm -f regexec.c.gcov regexec.gcda
3331      $ ./perl.gcov
3332      $ gcov regexec.c
3333      $ view regexec.c.gcov
3334  
3335  =head2 Pixie Profiling
3336  
3337  Pixie is a profiling tool available on IRIX and Tru64 (aka Digital
3338  UNIX aka DEC OSF/1) platforms.  Pixie does its profiling using
3339  I<basic-block counting>.
3340  
3341  You can build a profiled version of perl called F<perl.pixie> by
3342  invoking the make target "perl.pixie" (what is required is that Perl
3343  must be compiled using the C<-g> flag; you may need to re-Configure).
3344  
3345  In Tru64 a file called F<perl.Addrs> will also be silently created;
3346  this file contains the addresses of the basic blocks.  Running the
3347  profiled version of Perl will create a new file called "perl.Counts"
3348  which contains the counts for the basic blocks for that particular
3349  program execution.
3350  
3351  To display the results you use the F<prof> utility.  The exact
3352  incantation depends on your operating system, "prof perl.Counts" in
3353  IRIX, and "prof -pixie -all -L. perl" in Tru64.
3354  
3355  In IRIX the following prof options are available:
3356  
3357  =over 4
3358  
3359  =item -h
3360  
3361  Reports the most heavily used lines in descending order of use.
3362  Useful for finding the hotspot lines.
3363  
3364  =item -l
3365  
3366  Groups lines by procedure, with procedures sorted in descending order of use.
3367  Within a procedure, lines are listed in source order.
3368  Useful for finding the hotspots of procedures.
3369  
3370  =back
3371  
3372  In Tru64 the following options are available:
3373  
3374  =over 4
3375  
3376  =item -p[rocedures]
3377  
3378  Procedures sorted in descending order by the number of cycles executed
3379  in each procedure.  Useful for finding the hotspot procedures.
3380  (This is the default option.)
3381  
3382  =item -h[eavy]
3383  
3384  Lines sorted in descending order by the number of cycles executed in
3385  each line.  Useful for finding the hotspot lines.
3386  
3387  =item -i[nvocations]
3388  
3389  The called procedures are sorted in descending order by number of calls
3390  made to the procedures.  Useful for finding the most used procedures.
3391  
3392  =item -l[ines]
3393  
3394  Grouped by procedure, sorted by cycles executed per procedure.
3395  Useful for finding the hotspots of procedures.
3396  
3397  =item -testcoverage
3398  
3399  The compiler emitted code for these lines, but the code was unexecuted.
3400  
3401  =item -z[ero]
3402  
3403  Unexecuted procedures.
3404  
3405  =back
3406  
3407  For further information, see your system's manual pages for pixie and prof.
3408  
3409  =head2 Miscellaneous tricks
3410  
3411  =over 4
3412  
3413  =item *
3414  
3415  Those debugging perl with the DDD frontend over gdb may find the
3416  following useful:
3417  
3418  You can extend the data conversion shortcuts menu, so for example you
3419  can display an SV's IV value with one click, without doing any typing.
3420  To do that simply edit the ~/.ddd/init file and add after:
3421  
3422    ! Display shortcuts.
3423    Ddd*gdbDisplayShortcuts: \
3424    /t ()   // Convert to Bin\n\
3425    /d ()   // Convert to Dec\n\
3426    /x ()   // Convert to Hex\n\
3427    /o ()   // Convert to Oct(\n\
3428  
3429  the following two lines:
3430  
3431    ((XPV*) (())->sv_any )->xpv_pv  // 2pvx\n\
3432    ((XPVIV*) (())->sv_any )->xiv_iv // 2ivx
3433  
3434  so now you can do ivx and pvx lookups or you can plug in the
3435  sv_peek "conversion":
3436  
3437    Perl_sv_peek(my_perl, (SV*)()) // sv_peek
3438  
3439  (The my_perl is for threaded builds.)
3440  Just remember that every line but the last one should end with \n\
3441  
3442  Alternatively edit the init file interactively via:
3443  3rd mouse button -> New Display -> Edit Menu
3444  
3445  Note: you can define up to 20 conversion shortcuts in the gdb
3446  section.
3447  
3448  =item *
3449  
3450  If you see in a debugger a memory area mysteriously full of 0xABABABAB
3451  or 0xEFEFEFEF, you may be seeing the effect of the Poison() macros;
3452  see L<perlclib>.
3453  
3454  =item *
3455  
3456  Under ithreads the optree is read only. If you want to enforce this, to check
3457  for write accesses from buggy code, compile with C<-DPL_OP_SLAB_ALLOC> to
3458  enable the OP slab allocator and C<-DPERL_DEBUG_READONLY_OPS> to enable code
3459  that allocates op memory via C<mmap>, and sets it read-only at run time.
3460  Any write access to an op results in a C<SIGBUS> and an abort.
3461  
3462  This code is intended for development only, and may not be portable even to
3463  all Unix variants. Also, it is an 80% solution, in that it isn't able to make
3464  all ops read only. Specifically it
3465  
3466  =over
3467  
3468  =item 1
3469  
3470  Only sets read-only on all slabs of ops at C<CHECK> time, hence ops allocated
3471  later via C<require> or C<eval> will be read-write.
3472  
3473  =item 2
3474  
3475  Turns an entire slab of ops read-write if the refcount of any op in the slab
3476  needs to be decreased.
3477  
3478  =item 3
3479  
3480  Turns an entire slab of ops read-write if any op from the slab is freed.
3481  
3482  =back
3483  
3484  It's not possible to turn the slabs back to read-only after an action requiring
3485  read-write access, as either can happen during op tree building time, so
3486  there may still be legitimate write access.
3487  
3488  However, as an 80% solution it is still effective, as currently it catches
3489  a write access during the generation of F<Config.pm>, which means that we
3490  can't yet build F<perl> with this enabled.
3491  
3492  =back
3493  
3494  
3495  =head1 CONCLUSION
3496  
3497  We've had a brief look around the Perl source, how to maintain quality
3498  of the source code, an overview of the stages F<perl> goes through
3499  when it's running your code, how to use debuggers to poke at the Perl
3500  guts, and finally how to analyse the execution of Perl. We took a very
3501  simple problem and demonstrated how to solve it fully - with
3502  documentation, regression tests, and finally a patch for submission to
3503  p5p.  Finally, we talked about how to use external tools to debug and
3504  test Perl.
3505  
3506  I'd now suggest you read over those references again, and then, as soon
3507  as possible, get your hands dirty. The best way to learn is by doing,
3508  so:
3509  
3510  =over 3
3511  
3512  =item *
3513  
3514  Subscribe to perl5-porters, follow the patches and try and understand
3515  them; don't be afraid to ask if there's a portion you're not clear on -
3516  who knows, you may unearth a bug in the patch...
3517  
3518  =item *
3519  
3520  Keep up to date with the bleeding edge Perl distributions and get
3521  familiar with the changes. Try and get an idea of what areas people are
3522  working on and the changes they're making.
3523  
3524  =item *
3525  
3526  Do read the README associated with your operating system, e.g. README.aix
3527  on the IBM AIX OS. Don't hesitate to supply patches to that README if
3528  you find anything missing or changed over a new OS release.
3529  
3530  =item *
3531  
3532  Find an area of Perl that seems interesting to you, and see if you can
3533  work out how it works. Scan through the source, and step over it in the
3534  debugger. Play, poke, investigate, fiddle! You'll probably get to
3535  understand not just your chosen area but a much wider range of F<perl>'s
3536  activity as well, and probably sooner than you'd think.
3537  
3538  =back
3539  
3540  =over 3
3541  
3542  =item I<The Road goes ever on and on, down from the door where it began.>
3543  
3544  =back
3545  
3546  If you can do these things, you've started on the long road to Perl porting.
3547  Thanks for wanting to help make Perl better - and happy hacking!
3548  
3549  =head1 AUTHOR
3550  
3551  This document was written by Nathan Torkington, and is maintained by
3552  the perl5-porters mailing list.

