Text processing - Python vs Perl performance [closed]


Here are my Perl and Python scripts to do some simple text processing on about 21 log files, each roughly 300 KB to 1 MB (maximum), repeated 5 times (a total of 125 files, since the logs are repeated 5 times).



Python Code (modified to use compiled re patterns and the re.I flag)

#!/usr/bin/python

import re
import fileinput

exists_re = re.compile(r'^(.*?) INFO.*Such a record already exists', re.I)
location_re = re.compile(r'^AwbLocation (.*?) insert into', re.I)

for line in fileinput.input():
    fn = fileinput.filename()
    currline = line.rstrip()

    mprev = exists_re.search(currline)

    if mprev:
        xlogtime = mprev.group(1)

    mcurr = location_re.search(currline)

    if mcurr:
        print fn, xlogtime, mcurr.group(1)



Perl Code


#!/usr/bin/perl

while (<>) {
    chomp;

    if (m/^(.*?) INFO.*Such a record already exists/i) {
        $xlogtime = $1;
    }

    if (m/^AwbLocation (.*?) insert into/i) {
        print "$ARGV $xlogtime $1\n";
    }
}



On my PC, both scripts generate exactly the same result file of 10,790 lines. And here is the timing, done on Cygwin's Perl and Python implementations.


User@UserHP /cygdrive/d/tmp/Clipboard
# time /tmp/scripts/python/afs/process_file.py *log* *log* *log* *log* *log* > summarypy.log

real 0m8.185s
user 0m8.018s
sys  0m0.092s

User@UserHP /cygdrive/d/tmp/Clipboard
# time /tmp/scripts/python/afs/process_file.pl *log* *log* *log* *log* *log* > summarypl.log

real 0m1.481s
user 0m1.294s
sys  0m0.124s



Originally, this simple text processing took 10.2 seconds with Python and only 1.9 seconds with Perl.



(UPDATE) But after switching the Python script to compiled re patterns, it now takes 8.2 seconds in Python and 1.5 seconds in Perl. Perl is still much faster.



Is there any way to improve the speed of the Python version, or is it simply obvious that Perl will be the faster one for simple text processing?



By the way, this was not the only test I did for simple text processing... And however I rewrite the source code, Perl always wins by a large margin. Not once did Python perform better for a simple m/regex/ match-and-print job.



Please do not suggest using C, C++, Assembly, other flavours of Python, etc.



I am looking for a solution using standard Python and its built-in modules, compared against standard Perl (not even using its modules). Boy, do I wish I could use Python for all my tasks because of its readability, but give up that much speed? I don't think so.



So, please suggest how the code can be improved to get results comparable to Perl's.



UPDATE: 2012-10-18



As other users suggested, Perl has its place and Python has its own.



So, for this question, one can safely conclude that for simple regex matching on each line of hundreds or thousands of text files, writing the results to a file (or printing them to the screen), Perl will always win on performance for this job. It is as simple as that.



Please note that when I say Perl wins in performance, only standard Perl and Python are compared, not resorting to obscure modules (obscure for a normal user like me), and not calling C, C++, or assembly libraries from Python or Perl. We don't have time to learn all those extra steps and installations for a simple text-matching job.



So, Perl rocks for text processing and regex.



Python has other places where it rocks.



Update 2013-05-29: An excellent article that makes a similar comparison is here. Perl again wins for simple text matching; for more details, read the article.



This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question.





Are the patterns only compiled once in Python (as they are in Perl)?
– ikegami
Oct 9 '12 at 5:54






I wonder if the difference is in the time spent backtracking in lines that don't match.
– ikegami
Oct 9 '12 at 6:19





I'd run the Python code through a profiler to discover where it's spending its time. You might also try using PCRE (Perl Compatible Regular Expressions) rather than Python's built-in regexes (here's another implementation) and see if that does better.
– Schwern
Oct 9 '12 at 8:58






"Closed as too localized" seems too funny and subjective to me.
– pepr
Oct 10 '12 at 18:46





I've seen benchmarks before that suggest that Perl's regexp implementation is just that much faster than Python's. Otherwise they should be of comparable speed.
– Leon Timmermans
Oct 14 '12 at 14:30




5 Answers



This is exactly the sort of stuff that Perl was designed to do, so it doesn't surprise me that it's faster.



One easy optimization in your Python code would be to precompile those regexes, so they aren't getting recompiled each time.


exists_re = re.compile(r'^(.*?) INFO.*Such a record already exists')
location_re = re.compile(r'^AwbLocation (.*?) insert into')



And then in your loop:


mprev = exists_re.search(currline)



and


mcurr = location_re.search(currline)



That by itself won't magically bring your Python script in line with your Perl script, but repeatedly calling re's module-level functions in a loop without compiling first is bad practice in Python.
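
For a rough sense of the difference, a micro-benchmark along these lines can compare the module-level call against the precompiled pattern. This is only a sketch: the sample line and iteration count are made up, and the numbers will vary by machine.

import timeit

setup = '''
import re
line = "2012-10-09 05:54:12 INFO Such a record already exists in the table"
exists_re = re.compile(r"^(.*?) INFO.*Such a record already exists")
'''

# Module-level call: the pattern is fetched from re's internal cache each time.
t_module = timeit.timeit(
    're.search(r"^(.*?) INFO.*Such a record already exists", line)',
    setup=setup, number=100000)

# Precompiled: the cache lookup is skipped entirely.
t_compiled = timeit.timeit('exists_re.search(line)', setup=setup, number=100000)

print t_module, t_compiled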





re caches recently-used regexes, so this is probably not a huge issue.
– nneonneo
Oct 9 '12 at 6:05







@nneonneo I've heard that numerous times and I've seen the lines in the re source code which do the caching. But somehow I've never seen a benchmark that puts the two in the same order of magnitude, but several benchmarks (including a quick and dirty one I did a second ago) which put the pre-compiling option at several times faster.
– user395760
Oct 9 '12 at 6:15







Interesting. Well, it's definitely good practice to precompile regexes, but I didn't really pay attention to the performance gap. Care to share the numbers?
– nneonneo
Oct 9 '12 at 6:17



Hypothesis: Perl spends less time backtracking in lines that don't match due to optimisations it has that Python doesn't.



What do you get by replacing


^(.*?) INFO.*Such a record already exists



with


^((?:(?! INFO).)*?) INFO.*Such a record already exists



or


^(?>(.*?) INFO).*Such a record already exists
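
(Note that the atomic group (?>...) is Perl syntax; Python's re module rejects it, so on the Python side only the lookahead variant can be tried.) To test the hypothesis, here is a sketch that times both patterns on a sample line; the line content is made up, and real log lines would be a more honest test:

import timeit

setup = '''
import re
line = "x" * 300   # a long line with no " INFO" in it, so the search must fail
plain = re.compile(r"^(.*?) INFO.*Such a record already exists")
guarded = re.compile(r"^((?:(?! INFO).)*?) INFO.*Such a record already exists")
'''

# How much the rewrite helps depends heavily on the input, so compare both
# on lines that match, lines that almost match, and lines that don't.
print timeit.timeit('plain.search(line)', setup=setup, number=100000)
print timeit.timeit('guarded.search(line)', setup=setup, number=100000)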



Function calls are a bit expensive in terms of time in Python. And yet you have a loop invariant function call to get the file name inside the loop:


fn = fileinput.filename()



Move this line above the for loop and you should see some improvement in your Python timing, though probably not enough to beat Perl.
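
If you want to keep fileinput, a middle ground (a sketch; fileinput.isfirstline() is a standard call) is to refresh the name only when a new file begins:

import fileinput

fn = None
for line in fileinput.input():
    # isfirstline() is True only for the first line of each file, so the
    # filename() call now runs once per file instead of once per line.
    if fileinput.isfirstline():
        fn = fileinput.filename()
    # ... the regex matching on the line goes here ...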







+1 for the good eye, but... well, the filename changes, so it is not a loop invariant. Anyway, it may be faster not to use the fileinput module and to add another, outer loop through the filenames. Then the filename would be the invariant.
– pepr
Oct 9 '12 at 7:25







An interesting point, but this has to be minuscule compared to the processing time of the two regexes.
– dan1111
Oct 9 '12 at 8:30



In general, all artificial benchmarks are evil. However, everything else being equal (the algorithmic approach), you can make improvements on a relative basis. It should be noted, though, that I don't use Perl, so I can't argue in its favor. That being said, with Python you can try using Pyrex or Cython to improve performance, or, if you are adventurous, you can try converting the Python code into C++ via Shed Skin (which works for most of the core language and some, but not all, of the core modules).



Nevertheless, you can follow some of the tips posted here:



http://wiki.python.org/moin/PythonSpeed/PerformanceTips
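
One tip from that page applies directly here: hoist attribute lookups out of hot loops by binding the bound methods to local names. A sketch reworking the question's script along those lines (behavior unchanged):

#!/usr/bin/python

import re
import fileinput

exists_re = re.compile(r'^(.*?) INFO.*Such a record already exists', re.I)
location_re = re.compile(r'^AwbLocation (.*?) insert into', re.I)

# Local-variable lookups are cheaper than attribute lookups,
# so bind the search methods once, before the loop.
exists_search = exists_re.search
location_search = location_re.search

for line in fileinput.input():
    currline = line.rstrip()
    mprev = exists_search(currline)
    if mprev:
        xlogtime = mprev.group(1)
    mcurr = location_search(currline)
    if mcurr:
        print fileinput.filename(), xlogtime, mcurr.group(1)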





I am neither an expert Perl nor Python programmer; I use Perl and Python the way an ordinary beginner-to-intermediate-level book teaches them. If I really cared about raw performance, certainly I would use your suggestions and maybe even assembly (if I ever learn it). Using what is readily available within Perl or Python and their modules should be the only kind of suggestion I expect for improving the code's performance. I don't expect to pick up some other magic buzzwords and spend time learning the rest. Please suggest a pure solution that exists within the normal Python installation.
– ihightower
Oct 9 '12 at 6:28






I understand all artificial benchmarks can be evil. But this text processing is simple, and it is what I normally do day in, day out. So, if Python cannot improve on this speed using basic syntax within the original Python installation (just as I do with Perl), I will have to resort to Perl for my text processing tasks, to process the hundreds or hundreds of thousands of files I have to deal with. And one has to admit that Python is slow for simple text processing as given in my code. But boy, do I wish to use Python for its clean syntax; with this loss of speed, though? I don't think so.
– ihightower
Oct 9 '12 at 6:33






Regular expressions in Python are supplied via the re module. Regular expressions in Perl have built-in syntax and can be compiled inline (no function-call overhead). Text processing need not be that simple, though. Anyway, use the better tool for each task. My personal experience is that even slightly more complex Perl programs are much more difficult to read and maintain later.
– pepr
Oct 9 '12 at 7:21





-1. What is "evil" about this? It is a simple exercise that illustrates a significant performance difference between the two languages. How exactly are you supposed to compare the performance of two tools if not with a test like this? Write your entire program in both languages so that it is not "artificial"? Sure, there are pitfalls to benchmarking, but you have generalized that into a very dumb rule.
– dan1111
Oct 9 '12 at 7:26



I expect Perl to be faster. Just being curious, can you try the following?


#!/usr/bin/python

import re
import glob
import sys
import os

exists_re = re.compile(r'^(.*?) INFO.*Such a record already exists', re.I)
location_re = re.compile(r'^AwbLocation (.*?) insert into', re.I)

for mask in sys.argv[1:]:
    for fname in glob.glob(mask):
        if os.path.isfile(fname):
            f = open(fname)
            for line in f:
                mex = exists_re.search(line)
                if mex:
                    xlogtime = mex.group(1)

                mloc = location_re.search(line)
                if mloc:
                    print fname, xlogtime, mloc.group(1)
            f.close()



Update, as a reaction to "it is too complex":



Of course it looks more complex than the Perl version. Perl was built around regular expressions, so you can hardly find an interpreted language that is faster at them. The Perl syntax...


while (<>) {
    ...
}



... also hides a lot of things that have to be done somehow in a more general language. On the other hand, it is quite easy to make the Python code more readable if you move the unreadable part out:


#!/usr/bin/python

import re
import glob
import sys
import os

def input_files():
    '''The generator loops through the files defined by masks from cmd.'''
    for mask in sys.argv[1:]:
        for fname in glob.glob(mask):
            if os.path.isfile(fname):
                yield fname


exists_re = re.compile(r'^(.*?) INFO.*Such a record already exists', re.I)
location_re = re.compile(r'^AwbLocation (.*?) insert into', re.I)

for fname in input_files():
    with open(fname) as f:   # Now the f.close() is done automatically
        for line in f:
            mex = exists_re.search(line)
            if mex:
                xlogtime = mex.group(1)

            mloc = location_re.search(line)
            if mloc:
                print fname, xlogtime, mloc.group(1)


Here the input_files() generator could be placed elsewhere (say, in another module), or it could be reused. It is even possible to mimic Perl's while (<>) {...} easily, though not with the same syntax:


#!/usr/bin/python

import re
import glob
import sys
import os

def input_lines():
    '''The generator loops through the lines of the files defined by masks from cmd.'''
    for mask in sys.argv[1:]:
        for fname in glob.glob(mask):
            if os.path.isfile(fname):
                with open(fname) as f:   # now the f.close() is done automatically
                    for line in f:
                        yield fname, line


exists_re = re.compile(r'^(.*?) INFO.*Such a record already exists', re.I)
location_re = re.compile(r'^AwbLocation (.*?) insert into', re.I)

for fname, line in input_lines():
    mex = exists_re.search(line)
    if mex:
        xlogtime = mex.group(1)

    mloc = location_re.search(line)
    if mloc:
        print fname, xlogtime, mloc.group(1)



Then the last for loop can look (in principle) as simple as Perl's while (<>) {...}. Such readability enhancements are more difficult in Perl.



Anyway, it will not make the Python program faster; Perl will still be faster here. Perl is a file/text cruncher. But, in my opinion, Python is a better programming language for more general purposes.
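
That said, one standard-library-only tweak that can narrow the gap (though rarely close it) is a plain substring test before each regex, so lines that cannot match never reach the expensive search. A sketch building on the scripts above; it assumes the keywords always appear in exactly this case, which the re.I flag otherwise would not require:

#!/usr/bin/python

import re
import sys

exists_re = re.compile(r'^(.*?) INFO.*Such a record already exists', re.I)
location_re = re.compile(r'^AwbLocation (.*?) insert into', re.I)

xlogtime = None
for fname in sys.argv[1:]:
    with open(fname) as f:
        for line in f:
            # 'in' on a plain string is much cheaper than running a regex,
            # so most non-matching lines are rejected almost for free.
            if 'INFO' in line:
                mex = exists_re.search(line)
                if mex:
                    xlogtime = mex.group(1)
            if 'AwbLocation' in line:
                mloc = location_re.search(line)
                if mloc:
                    print fname, xlogtime, mloc.group(1)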





@ihightower Please post your attempted edit as a new answer instead.
– Craig Ringer
Oct 9 '12 at 10:47





sorry will do. thanks.
– ihightower
Oct 9 '12 at 10:47






@pepr I have posted my results as a separate answer. The code now runs in 6.1 secs (a 2-sec improvement over earlier) compared to Perl's 1.8 secs. Please read my answer for more info.
– ihightower
Oct 9 '12 at 10:57





@ihightower: Using the with construct, it would be one line shorter. It is true that the nested for loops look terrible. However, they state exactly what is done: 1) get the command-line arguments, 2) expand each argument as a glob mask, 3) if it is a file name, open it and process its lines.
– pepr
Oct 9 '12 at 14:22







As text processing is so universal, why won't Python just ship a built-in standard module generic enough to apply to almost all cases? It could then improve performance for normal users, i.e. the vast majority of people... e.g. import TextTool or something, with some standard stuff that speeds up text processing.
– ihightower
Dec 4 '13 at 6:01

