This section is provided for users who "don't want to read the
manual." It provides a very brief overview, and allows a user to
rapidly perform profiling on an existing application.
To profile an application with a main entry point of "foo()", you
would add the following to your module:
import profile
profile.run('foo()')
The above action would cause "foo()" to be run, and a series of
informative lines (the profile) to be printed. The above approach is
most useful when working with the interpreter. If you would like to
save the results of a profile into a file for later examination, you
can supply a file name as the second argument to the run()
function:
import profile
profile.run('foo()', 'fooprof')
The file "profile.py" can also be invoked as
a script to profile another script. For example:
python /usr/local/lib/python1.5/profile.py myscript.py
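If you would rather do the same thing from the interpreter, one possible approach is to profile the script's top-level code by importing it under the profiler and saving the results to a file. This is only a sketch: the output file name "myscript.prof" is an illustration, and it only works if "myscript" has not already been imported in the current session (a second import would not re-run the top-level code).

import profile
# run the top-level code of myscript.py under the profiler and
# save the raw statistics into the file "myscript.prof"
profile.run('import myscript', 'myscript.prof')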
When you wish to review the profile, you should use the methods in the
pstats module. Typically you would load the statistics data as
follows:
import pstats
p = pstats.Stats('fooprof')
The class Stats (the above code just created an instance of
this class) has a variety of methods for manipulating and printing the
data that was just read into "p". When you ran
profile.run() above, what was printed was the result of three
method calls:
p.strip_dirs().sort_stats(-1).print_stats()
The first method removed the extraneous path from all the module
names. The second method sorted all the entries according to the
standard module/line/name string that is printed (this is to comply
with the semantics of the old profiler). The third method printed out
all the statistics. You might try the following sort calls:
p.sort_stats('name')
p.print_stats()
The first call will actually sort the list by function name, and the
second call will print out the statistics. The following are some
interesting calls to experiment with:
p.sort_stats('cumulative').print_stats(10)
This sorts the profile by cumulative time in a function, and then only
prints the ten most significant lines. If you want to understand what
algorithms are taking time, the above line is what you would use.
If you were looking to see what functions were looping a lot, and
taking a lot of time, you would do:
p.sort_stats('time').print_stats(10)
to sort according to time spent within each function, and then print
the statistics for the top ten functions.
You might also try:
p.sort_stats('file').print_stats('__init__')
This will sort all the statistics by file name, and then print out
statistics for only the class init methods (because they are spelled
with "__init__" in them). As one final example, you could try:
p.sort_stats('time', 'cum').print_stats(.5, 'init')
This line sorts statistics with a primary key of time, and a secondary
key of cumulative time, and then prints out some of the statistics.
To be specific, the list is first culled down to 50% of its original
size (that is the ".5" argument), then only entries containing "init"
are kept, and that sub-sub-list is printed.
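Since the restrictions are applied one after another, the order in which you give them matters. As a small variation on the line above (the same arguments, just reversed), the following first keeps only the entries containing "init" and then prints 50% of those:

# same restrictions in the opposite order: keep only entries whose
# line contains 'init', then print 50% of that shorter list
p.print_stats('init', .5)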
If you wondered what functions called the above functions, you could
now ("p" is still sorted according to the last criteria) do:
p.print_callers(.5, 'init')
and you would get a list of callers for each of the listed functions.
If you want more functionality, you're going to have to read the
manual, or guess what the following functions do:
p.print_callees()
p.add('fooprof')
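Putting the pieces above together, a minimal end-to-end session might look like the following sketch, reusing the illustrative names "foo()" and "fooprof" from earlier (so "foo" must already be defined in your session):

import profile
import pstats
# profile foo() and save the raw data to the file 'fooprof'
profile.run('foo()', 'fooprof')
# load the saved data, strip the directory paths, and report the
# ten functions with the largest cumulative times
p = pstats.Stats('fooprof')
p.strip_dirs().sort_stats('cumulative').print_stats(10)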