Regular expressions are powerful. There's no doubt about it. The .NET and Perl-derived implementations in particular are rich and capable.
For the most part regular expressions are there to save you time from parsing text the hard way. But if you're spending more time bending regular expressions to your will to accomplish something that could be done more easily and efficiently with procedural code, then that kind of defeats the purpose.
I've wanted to write this article for a while. Then today I stumbled across this StackOverflow question, which is a prime example where the procedural solution was actually quicker & easier to write, more understandable, and more efficient. Your ability to identify these situations will improve naturally with experience. But I thought I'd list a few good & bad scenarios for regular expressions...
Good
Data validation can be done easily and concisely with regular expressions in most cases.
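For instance, a simple validation check usually boils down to a single Regex.IsMatch call. Here's a minimal C# sketch; the US ZIP code pattern is just an illustration, not something from the scenarios above:

    using System;
    using System.Text.RegularExpressions;

    class ZipValidation
    {
        // Illustrative pattern: a US ZIP code, 5 digits with an optional +4 suffix.
        static readonly Regex ZipCode = new Regex(@"^\d{5}(-\d{4})?$", RegexOptions.Compiled);

        static void Main()
        {
            Console.WriteLine(ZipCode.IsMatch("90210"));      // True
            Console.WriteLine(ZipCode.IsMatch("90210-1234")); // True
            Console.WriteLine(ZipCode.IsMatch("9021"));       // False
        }
    }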
Good
Syntax highlighting can be done with regular expressions, or in some cases a combination of procedural code with regex.
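As a rough illustration of the regex half of that combination, a keyword highlighter can be little more than a Regex.Replace call; the keywords and tags below are hypothetical, and a real highlighter would still need procedural code for strings, comments, and nesting:

    using System;
    using System.Text.RegularExpressions;

    class KeywordHighlighter
    {
        static void Main()
        {
            string code = "if (count > 0) return count; else return 0;";

            // Wrap a handful of keywords in <b> tags.
            string highlighted = Regex.Replace(code, @"\b(if|else|return)\b", "<b>$1</b>");

            Console.WriteLine(highlighted);
        }
    }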
Good
Performing search or search/replace operations on documents can be done with regex internally without much trouble. There are cases where the traditional wildcard search isn't powerful enough and a regex may be needed to find words near neighboring words or punctuation.
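As one made-up example of a search a plain wildcard can't express, the sketch below finds "parse"/"parser" within a couple of words of "regex":

    using System;
    using System.Text.RegularExpressions;

    class ProximitySearch
    {
        static void Main()
        {
            string document = "The parser choked. Try the regex parser instead, or parse by hand.";

            // Find "pars..." within two words of "regex" -- a proximity query
            // that a simple wildcard search can't express.
            var matches = Regex.Matches(
                document,
                @"\bregex\b(?:\W+\w+){0,2}?\W+\bpars\w*",
                RegexOptions.IgnoreCase);

            foreach (Match m in matches)
                Console.WriteLine(m.Value); // "regex parser"
        }
    }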
Bad
Public websites should not allow users to enter regular expressions for searching. Giving the full power of regex to the general public for a website's search engine could have a devastating effect. There is such a thing as a regular expression denial of service (ReDoS) attack that should be avoided at all costs.
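To make the risk concrete, here's a minimal sketch of catastrophic backtracking, together with the match-timeout guard available in newer versions of .NET (4.5 and later); the pattern and hostile input are illustrative only:

    using System;
    using System.Text.RegularExpressions;

    class ReDosDemo
    {
        static void Main()
        {
            // Classic catastrophic-backtracking pattern: nested quantifiers.
            // Against a long non-matching input the engine explores an
            // exponential number of paths.
            string hostileInput = new string('a', 40) + "!";

            var risky = new Regex(@"^(a+)+$",
                                  RegexOptions.None,
                                  TimeSpan.FromMilliseconds(250)); // match timeout

            try
            {
                Console.WriteLine(risky.IsMatch(hostileInput));
            }
            catch (RegexMatchTimeoutException)
            {
                Console.WriteLine("Match aborted -- pattern took too long.");
            }
        }
    }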
Bad
HTML/XML parsing should not be done with regular expressions. First of all, regular expressions are designed to match regular languages, the simplest class in the Chomsky hierarchy, and HTML and XML are not regular: arbitrarily nested tags can't be described by a true regular expression. Now, with the advent of balancing group definitions in the .NET flavor of regular expressions you can venture into slightly more complex territory and do a few things with XML or HTML in controlled situations. However, there's not much point. There are parsers available for both XML and HTML which will do the job more easily, more efficiently, and more reliably. In .NET, XML can be handled the old XmlDocument way or even more easily with LINQ to XML. Or for HTML there's the HTML Agility Pack.
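For example, pulling values out of XML with LINQ to XML takes only a few lines and copes with nesting, attributes, and entities that would trip up a pattern; the sample document below is made up:

    using System;
    using System.Linq;
    using System.Xml.Linq;

    class LinqToXmlExample
    {
        static void Main()
        {
            string xml = @"<books>
                             <book genre='reference'><title>Mastering Regular Expressions</title></book>
                             <book genre='fiction'><title>Some Novel</title></book>
                           </books>";

            // The parser handles the structure; no pattern required.
            var titles = XDocument.Parse(xml)
                                  .Descendants("book")
                                  .Where(b => (string)b.Attribute("genre") == "reference")
                                  .Select(b => (string)b.Element("title"));

            foreach (var title in titles)
                Console.WriteLine(title); // "Mastering Regular Expressions"
        }
    }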
Conclusion
Regular expressions have their uses. I still contend that in many cases they can save the programmer a lot of time and effort. Of course, given infinite time & resources, one could almost always build a procedural solution that's more efficient than an equivalent regular expression.
Your decision to abandon regex should be based on 3 things:
1.) Is the regular expression so slow in your scenario that it has become a bottleneck?
2.) Is your procedural solution actually quicker & easier to write than the regular expression?
3.) Is there a specialized parser that will do the job better?

There's a general piece of advice that can be applied to many things, and this is certainly one of them: when there's a better tool available, use it.