The only reliable way to evaluate the relative performance of several regular expressions is to actually measure their search time, and to do so on different text samples, because different samples take different amounts of time to process.
The step count that regex101 shows is useful (in my subjective opinion) only when building recursive regular expressions, where it lets you roughly gauge the efficiency of the recursion.
Compare three regular expressions:
[az] - 33 steps https://regex101.com/r/vC5lG7/1
^[^az]*[az] - 4 steps https://regex101.com/r/vC5lG7/2
^[^az]*?[az] - 4 steps https://regex101.com/r/vC5lG7/3
Let's do a simple measurement of the execution time of these expressions:
    function test( $re, $text, $iters ) {
        $t1 = microtime( true );
        for ( $i = 0; $i < $iters; $i++ ) {
            $res = preg_match( $re, $text );
        }
        $t2 = microtime( true );
        echo ( $t2 - $t1 ) . "<br/>";
    }

    $re1 = "/[az]/";
    $re2 = "/^[^az]*[az]/";
    $re3 = "/^[^az]*?[az]/";
    $text = " a";
    $iters = 50000000;

    test( $re1, $text, $iters );
    test( $re2, $text, $iters );
    test( $re3, $text, $iters );
Result:
9.5385589599609
9.1712720394135
9.6358540058136
The first regular expression is only 4% slower than the second, although the number of steps was 33 vs. 4.
The first regular expression is 1% faster than the third, although the number of steps was 33 vs. 4.
Draw your own conclusions.
PS
Regarding "necessarily on different text samples, because different samples take different amounts of time to process" — let me briefly explain.
Example
Suppose we have a large piece of text where the match is at the very end. On such text a greedy quantifier will find it faster than a lazy one; but if the match is at the beginning of a large text, then the lazy quantifier will find it faster.
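This can be checked with the same timing approach as above. A minimal sketch — the pattern `/.*<b>/` vs `/.*?<b>/`, the 100 KB filler string, and the `time_match` helper are my own illustrative choices, not part of the measurements above:

```php
<?php
// Time how long repeated preg_match calls take for a given pattern and text.
function time_match( $re, $text, $iters = 200 ) {
    $t1 = microtime( true );
    for ( $i = 0; $i < $iters; $i++ ) {
        preg_match( $re, $text );
    }
    return microtime( true ) - $t1;
}

$big     = str_repeat( "x", 100000 );
$atEnd   = $big . "<b>";  // the only match is at the end of the text
$atStart = "<b>" . $big;  // the only match is at the beginning

// Greedy .* first runs to the end of the string and backtracks, so it is
// cheap when the match is near the end; lazy .*? advances one character at
// a time, so it is cheap when the match is near the beginning.
printf( "greedy, match at end:   %f\n", time_match( "/.*<b>/",  $atEnd ) );
printf( "lazy,   match at end:   %f\n", time_match( "/.*?<b>/", $atEnd ) );
printf( "greedy, match at start: %f\n", time_match( "/.*<b>/",  $atStart ) );
printf( "lazy,   match at start: %f\n", time_match( "/.*?<b>/", $atStart ) );
```

On my assumptions, the greedy version should win on `$atEnd` and the lazy version on `$atStart`; the exact numbers will vary from machine to machine.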