How to use Python regular expressions

This document is an introductory tutorial to using regular expressions in Python with the re module. It provides a gentler introduction than the corresponding section in the Library Reference.

Introduction

Regular expressions (called REs, or regexes, or regex patterns) are essentially a tiny, highly specialized programming language embedded inside Python and made available through the re module. Using this little language, you specify the rules for the set of possible strings that you want to match; this set might contain English sentences, or e-mail addresses, or TeX commands, or anything you like. You can then ask questions such as “Does this string match the pattern?”, or “Is there a match for the pattern anywhere in this string?”. You can also use REs to modify a string or to split it apart in various ways.

Regular expression patterns are compiled into a series of bytecodes which are then executed by a matching engine written in C. For advanced use, it may be necessary to pay careful attention to how the engine will execute a given RE, and write the RE in a certain way in order to produce bytecode that runs faster. Optimization isn’t covered in this document, because it requires that you have a good understanding of the matching engine’s internals.

The regular expression language is relatively small and restricted, so not all possible string processing tasks can be done using regular expressions. There are also tasks that can be done with regular expressions, but the expressions turn out to be very complicated. In these cases, you may be better off writing Python code to do the processing; while Python code will be slower than an elaborate regular expression, it will also probably be more understandable.

Simple Patterns

We’ll start by learning about the simplest possible regular expressions. Since regular expressions are used to operate on strings, we’ll begin with the most common task: matching characters.

For a detailed explanation of the computer science underlying regular expressions (deterministic and non-deterministic finite automata), you can refer to almost any textbook on writing compilers.

Matching Characters

Most letters and characters will simply match themselves. For example, the regular expression test will match the string test exactly. (You can enable a case-insensitive mode that would let this RE match Test or TEST as well; more about this later.)

There are exceptions to this rule; some characters are special metacharacters, and don’t match themselves. Instead, they signal that some out-of-the-ordinary thing should be matched, or they affect other portions of the RE by repeating them or changing their meaning. Much of this document is devoted to discussing various metacharacters and what they do.

Here’s a complete list of the metacharacters; their meanings will be discussed in the rest of this HOWTO.

. ^ $ * + ? { } [ ] \ | ( )

The first metacharacters we’ll look at are [ and ]. They’re used for specifying a character class, which is a set of characters that you wish to match. Characters can be listed individually, or a range of characters can be indicated by giving two characters and separating them by a '-'. For example, [abc] will match any of the characters a, b, or c; this is the same as [a-c], which uses a range to express the same set of characters. If you wanted to match only lowercase letters, your RE would be [a-z].
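This equivalence is easy to check in the interpreter; a minimal sketch (the sample characters are only illustrative):

```python
import re

# [abc] and [a-c] describe exactly the same three-character set.
for pattern in ('[abc]', '[a-c]'):
    p = re.compile(pattern)
    results = [bool(p.match(ch)) for ch in 'abcd']
    print(pattern, results)  # 'd' is outside the set in both cases
```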

Metacharacters (except \) are not active inside classes. For example, [akm$] will match any of the characters 'a', 'k', 'm', or '$'; '$' is usually a metacharacter, but inside a character class it’s stripped of its special nature.

You can match the characters not listed within the class by complementing the set. This is indicated by including a '^' as the first character of the class. For example, [^5] will match any character except '5'. If the caret appears elsewhere in a character class, it does not have special meaning. For example: [5^] will match either a '5' or a '^'.
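A quick check of both caret behaviors (a sketch; the digits chosen are arbitrary):

```python
import re

print(re.match('[^5]', '6'))  # matches: anything except '5'
print(re.match('[^5]', '5'))  # None
print(re.match('[5^]', '^'))  # '^' is literal when it isn't first
print(re.match('[5^]', '5'))  # matches as well
```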

Perhaps the most important metacharacter is the backslash, \. As in Python string literals, the backslash can be followed by various characters to signal various special sequences. It’s also used to escape all the metacharacters so you can still match them in patterns; for example, if you need to match a [ or \, you can precede them with a backslash to remove their special meaning: \[ or \\.

Some of the special sequences beginning with '\' represent predefined sets of characters that are often useful, such as the set of digits, the set of letters, or the set of anything that isn’t whitespace.

Let’s take an example: \w matches any alphanumeric character. If the regex pattern is expressed in bytes, this is equivalent to the class [a-zA-Z0-9_]. If the regex pattern is a string, \w will match all the characters marked as letters in the Unicode database provided by the unicodedata module. You can use the more restricted definition of \w in a string pattern by supplying the re.ASCII flag when compiling the regular expression.

The following list of special sequences isn’t complete. For a complete list of sequences and expanded class definitions for Unicode string patterns, see the last part of Regular Expression Syntax in the Standard Library reference. In general, the Unicode versions match any character that’s in the appropriate category in the Unicode database.

\d

Matches any decimal digit; this is equivalent to the class [0-9].

\D

Matches any non-digit character; this is equivalent to the class [^0-9].

\s

Matches any whitespace character; this is equivalent to the class [ \t\n\r\f\v].

\S

Matches any non-whitespace character; this is equivalent to the class [^ \t\n\r\f\v].

\w

Matches any alphanumeric character; this is equivalent to the class [a-zA-Z0-9_].

\W

Matches any non-alphanumeric character; this is equivalent to the class [^a-zA-Z0-9_].

These sequences can be included inside a character class. For example, [\s,.] is a character class that will match any whitespace character, or ',' or '.'.
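As a small illustration of mixing a special sequence with literal characters inside a class (using the [\s,.] class just described):

```python
import re

p = re.compile(r'[\s,.]')
for ch in (' ', '\t', ',', '.', 'x'):
    print(repr(ch), bool(p.match(ch)))  # only 'x' falls outside the class
```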

The final metacharacter in this section is . (the dot). It matches anything except a newline character, and there’s an alternate mode (re.DOTALL) where it will match even a newline. . is often used where you want to match “any character”.
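A minimal demonstration of . and the re.DOTALL alternate mode:

```python
import re

print(re.match('.', 'a'))               # matches any ordinary character
print(re.match('.', '\n'))              # None: '.' skips newlines by default
print(re.match('.', '\n', re.DOTALL))   # with re.DOTALL, newline matches too
```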

Repeating Things

Being able to match varying sets of characters is the first thing regular expressions can do that isn’t already possible with the methods available on strings. However, if that was the only additional capability of regexes, they wouldn’t be much of an advance. Another capability is that you can specify that portions of the RE must be repeated a certain number of times.

The first metacharacter for repeating things that we’ll look at is *. * doesn’t match the literal character '*'; instead, it specifies that the previous character can be matched zero or more times, instead of exactly once.

For example, ca*t will match 'ct' (0 'a' characters), 'cat' (1 'a'), 'caaat' (3 'a' characters), and so forth.

Repetitions such as * are greedy; when repeating a RE, the matching engine will try to repeat it as many times as possible. If later portions of the pattern don’t match, the matching engine will then back up and try again with fewer repetitions.

A step-by-step example will make this more obvious. Let’s consider the expression a[bcd]*b. This matches the letter 'a', zero or more letters from the class [bcd], and finally ends with a 'b'. Now imagine matching this RE against the string 'abcbd'.

Step 1 (matched: a): The a in the RE matches.

Step 2 (matched: abcbd): The engine matches [bcd]*, going as far as it can, which is to the end of the string.

Step 3 (failure): The engine tries to match b, but the current position is at the end of the string, so it fails.

Step 4 (matched: abcb): Back up, so that [bcd]* matches one less character.

Step 5 (failure): Try b again, but the current position is at the last character, which is a 'd'.

Step 6 (matched: abc): Back up again, so that [bcd]* is only matching bc.

Step 7 (matched: abcb): Try b again. This time the character at the current position is 'b', so it succeeds.

The end of the RE has now been reached, and it has matched abcb. This demonstrates how the matching engine goes as far as it can at first, and if no match is found it will then progressively back up and retry the rest of the RE again and again. It will back up until it has tried zero matches for [bcd]*, and if that subsequently fails, the engine will conclude that the string doesn’t match the RE at all.
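The outcome of this backtracking walkthrough can be confirmed directly:

```python
import re

# The greedy [bcd]* first grabs 'bcbd', then backs off until the final
# 'b' in the pattern can match, settling on 'abcb'.
m = re.match('a[bcd]*b', 'abcbd')
print(m.group())  # abcb
```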

Another repeating metacharacter is +, which matches one or more times. Pay careful attention to the difference between * and +; * matches zero or more times, so whatever’s being repeated may not be present at all, while + requires at least one occurrence. To use a similar example, ca+t will match 'cat' (1 'a') and 'caaat' (3 'a's), but won’t match 'ct'.
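Putting * and + side by side makes the difference concrete (fullmatch() is used here so the whole string must match):

```python
import re

for s in ('ct', 'cat', 'caaat'):
    star = bool(re.fullmatch('ca*t', s))
    plus = bool(re.fullmatch('ca+t', s))
    print(s, star, plus)  # only 'ct' separates the two quantifiers
```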

There are two more repeating operators or quantifiers. The question mark character, ?, matches either once or zero times; you can think of it as marking something as being optional. For example, home-?brew matches either 'homebrew' or 'home-brew'.
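A sketch of the optional dash in practice:

```python
import re

p = re.compile('home-?brew')
print(bool(p.fullmatch('homebrew')))    # True: zero dashes
print(bool(p.fullmatch('home-brew')))   # True: one dash
print(bool(p.fullmatch('home--brew')))  # False: '?' allows at most one
```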

The most complicated quantifier is {m,n}, where m and n are decimal integers. This quantifier means there must be at least m repetitions, and at most n. For example, a/{1,3}b will match 'a/b', 'a//b', and 'a///b'. It won’t match 'ab', which has no slashes, or 'a////b', which has four.
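Checking the slash counts from the example:

```python
import re

p = re.compile('a/{1,3}b')
for s in ('ab', 'a/b', 'a//b', 'a///b', 'a////b'):
    print(s, bool(p.fullmatch(s)))  # only 1 to 3 slashes succeed
```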

You can omit either m or n; in that case, a reasonable value is assumed for the missing value. Omitting m is interpreted as a lower limit of 0, while omitting n results in an upper bound of infinity.

Readers of a reductionist bent may notice that the three other quantifiers can all be expressed using this notation. {0,} is the same as *, {1,} is equivalent to +, and {0,1} is the same as ?. It’s better to use *, +, or ? when you can, simply because they’re shorter and easier to read.

Using Regular Expressions

Now that we’ve looked at some simple regular expressions, how do we actually use them in Python? The re module provides an interface to the regular expression engine, allowing you to compile REs into objects and then perform matches with them.

Compiling Regular Expressions

Regular expressions are compiled into pattern objects, which have methods for various operations such as searching for pattern matches or performing string substitutions.

>>> import re
>>> p = re.compile('ab*')
>>> p
re.compile('ab*')

re.compile() also accepts an optional flags argument, used to enable various special features and syntax variations. We’ll go over the available settings later, but for now a single example will do:

>>> p = re.compile('ab*', re.IGNORECASE)

The RE is passed to re.compile() as a string. REs are handled as strings because regular expressions aren’t part of the core Python language, and no special syntax was created for expressing them. (There are applications that don’t need REs at all, so there’s no need to bloat the language specification by including them.) Instead, the re module is simply a C extension module included with Python, just like the socket or zlib modules.

Putting REs in strings keeps the Python language simpler, but has one disadvantage which is the topic of the next section.

The Backslash Plague

As stated earlier, regular expressions use the backslash character ('\') to indicate special forms or to allow special characters to be used without invoking their special meaning. This conflicts with Python’s usage of the same character for the same purpose in string literals.

Let’s say you want to write a RE that matches the string \section, which might be found in a LaTeX file. To figure out what to write in the program code, start with the desired string to be matched. Next, you must escape any backslashes and other metacharacters by preceding them with a backslash, resulting in the string \\section. The resulting string that must be passed to re.compile() must be \\section. However, to express this as a Python string literal, both backslashes must be escaped again.

Characters

Stage

\section

Text string to be matched

\\section

Escaped backslash for re.compile()

"\\\\section"

Escaped backslashes for a string literal

In short, to match a literal backslash, one has to write '\\\\' as the RE string, because the regular expression must be \\, and each backslash must be expressed as \\ inside a regular Python string literal. In REs that feature backslashes repeatedly, this leads to lots of repeated backslashes and makes the resulting strings difficult to understand.

The solution is to use Python’s raw string notation for regular expressions; backslashes are not handled in any special way in a string literal prefixed with 'r', so r"\n" is a two-character string containing '\' and 'n', while "\n" is a one-character string containing a newline. Regular expressions will often be written in Python code using this raw string notation.
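A short check of how raw strings differ from regular literals (the pattern is only illustrative):

```python
import re

print(len(r"\n"), list(r"\n"))  # 2 characters: a backslash and 'n'
print(len("\n"))                # 1 character: an actual newline

# Both spellings compile to the same RE matching one literal backslash,
# but the raw form is far easier to read.
assert re.match("\\\\", "\\abc") is not None
assert re.match(r"\\", "\\abc") is not None
```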

In addition, special escape sequences that are valid in regular expressions, but not valid as Python string literals, now result in a DeprecationWarning and will eventually become a SyntaxError, which means the sequences will be invalid if raw string notation or escaping the backslashes isn’t used.

Regular String

Raw string

"ab*"

r"ab*"

"\\\\section"

r"\\section"

"\\w+\\s+\\1"

r"\w+\s+\1"

Performing Matches

Once you have an object representing a compiled regular expression, what do you do with it? Pattern objects have several methods and attributes. Only the most significant ones will be covered here; consult the docs for a complete listing.

Method/Attribute

Purpose

match()

Determine if the RE matches at the beginning of the string.

search()

Scan through a string, looking for any location where this RE matches.

findall()

Find all substrings where the RE matches, and return them as a list.

finditer()

Find all substrings where the RE matches, and return them as an iterator.

match() and search() return None if no match can be found. If they’re successful, a match object instance is returned, containing information about the match: where it starts and ends, the substring it matched, and more.

You can learn about this by interactively experimenting with the re module. If you have tkinter available, you may also want to look at Tools/demo/redemo.py, a demonstration program included with the Python distribution. It allows you to enter REs and strings, and displays whether the RE matches or fails. redemo.py can be quite useful when trying to debug a complicated RE.

This HOWTO uses the standard Python interpreter for its examples. First, run the Python interpreter, import the re module, and compile a RE:

>>> import re
>>> p = re.compile('[a-z]+')
>>> p
re.compile('[a-z]+')

Now, you can try matching various strings against the RE [a-z]+. An empty string shouldn’t match at all, since + means ‘one or more repetitions’. match() should return None in this case, which will cause the interpreter to print no output. You can explicitly print the result of match() to make this clear.

>>> p.match("")
>>> print(p.match(""))
None

Now, let’s try it on a string that it should match, such as tempo. In this case, match() will return a match object, so you should store the result in a variable for later use.

>>> m = p.match('tempo')
>>> m
<re.Match object; span=(0, 5), match='tempo'>

Now you can query the match object for information about the matching string. Match object instances also have several methods and attributes; the most important ones are:

Method/Attribute

Purpose

group()

Return the string matched by the RE

start()

Return the starting position of the match

end()

Return the ending position of the match

span()

Return a tuple containing the (start, end) positions of the match

Trying these methods will soon clarify their meaning:

>>> m.group()
'tempo'
>>> m.start(), m.end()
(0, 5)
>>> m.span()
(0, 5)

group() returns the substring that was matched by the RE. start() and end() return the starting and ending index of the match. span() returns both start and end indexes in a single tuple. Since the match() method only checks if the RE matches at the start of a string, start() will always be zero. However, the search() method of patterns scans through the string, so the match may not start at zero in that case.

>>> print(p.match('::: message'))
None
>>> m = p.search('::: message'); print(m)
<re.Match object; span=(4, 11), match='message'>
>>> m.group()
'message'
>>> m.span()
(4, 11)

In actual programs, the most common style is to store the match object in a variable, and then check if it was None. This usually looks like:

p = re.compile( ... )
m = p.match( 'string goes here' )
if m:
    print('Match found: ', m.group())
else:
    print('No match')

Two pattern methods return all of the matches for a pattern. findall() returns a list of matching strings:

>>> p = re.compile(r'\d+')
>>> p.findall('12 drummers drumming, 11 pipers piping, 10 lords a-leaping')
['12', '11', '10']

The r prefix, making the literal a raw string literal, is needed in this example because escape sequences in a normal “cooked” string literal that are not recognized by Python, as opposed to regular expressions, now result in a DeprecationWarning and will eventually become a SyntaxError. See The Backslash Plague.

findall() has to create the entire list before it can be returned as the result. The finditer() method returns a sequence of match object instances as an iterator:

>>> iterator = p.finditer('12 drummers drumming, 11 ... 10 ...')
>>> iterator
<callable_iterator object at 0x...>
>>> for match in iterator:
...     print(match.span())
...
(0, 2)
(22, 24)
(29, 31)

Module-Level Functions

You don’t have to create a pattern object and call its methods; the re module also provides top-level functions called re.match(), re.search(), re.findall(), re.sub(), and so forth. These functions take the same arguments as the corresponding pattern method with the RE string added as the first argument, and still return either None or a match object instance.

>>> print(re.match(r'From\s+', 'Fromage amk'))
None
>>> re.match(r'From\s+', 'From amk Thu May 14 19:12:10 1998')
<re.Match object; span=(0, 5), match='From '>

Under the hood, these functions simply create a pattern object for you and call the appropriate method on it. They also store the compiled object in a cache, so future calls using the same RE won’t need to parse the pattern again and again.

Should you use these module-level functions, or should you get the pattern and call its methods yourself? If you’re accessing a regex within a loop, pre-compiling it will save a few function calls. Outside of loops, there’s not much difference thanks to the internal cache.
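The two styles give identical results; a minimal comparison:

```python
import re

text = '12 drummers drumming, 11 pipers piping'

# Module-level function: the pattern is compiled (and cached) internally.
print(re.findall(r'\d+', text))   # ['12', '11']

# Explicit pattern object: handy when the same RE is reused in a loop.
p = re.compile(r'\d+')
print(p.findall(text))            # ['12', '11']
```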

Compilation Flags

Compilation flags let you modify some aspects of how regular expressions work. Flags are available in the re module under two names, a long name such as IGNORECASE and a short, one-letter form such as I. (If you’re familiar with Perl’s pattern modifiers, the one-letter forms use the same letters; the short form of re.VERBOSE is re.X, for example.) Multiple flags can be specified by bitwise OR-ing them; re.I | re.M sets both the I and M flags, for example.
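For example, OR-ing IGNORECASE and MULTILINE together (the sample text is illustrative):

```python
import re

# re.I | re.M: case-insensitive matching plus per-line anchoring for '^'.
p = re.compile('^python', re.I | re.M)
text = 'Python is fun\npython is popular'
print(p.findall(text))  # ['Python', 'python']
```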

Here’s a table of the available flags, followed by a more detailed explanation of each one.

Flag

Meaning

ASCII, A

Makes several escapes like \w, \b, \s and \d match only on ASCII characters with the respective property.

DOTALL, S

Make . match any character, including newlines.

IGNORECASE, I

Do case-insensitive matches.

LOCALE, L

Do a locale-aware match.

MULTILINE, M

Multi-line matching, affecting ^ and $.

VERBOSE, X (for ‘extended’)

Enable verbose REs, which can be organized more cleanly and understandably.

IGNORECASE
I

Perform case-insensitive matching; character class and literal strings will match letters by ignoring case. For example, [A-Z] will match lowercase letters, too. Full Unicode matching also works unless the ASCII flag is used to disable non-ASCII matches. When the Unicode patterns [a-z] or [A-Z] are used in combination with the IGNORECASE flag, they will match the 52 ASCII letters and 4 additional non-ASCII letters: ‘İ’ (U+0130, Latin capital letter I with dot above), ‘ı’ (U+0131, Latin small letter dotless i), ‘ſ’ (U+017F, Latin small letter long s) and ‘K’ (U+212A, Kelvin sign). Spam will match 'Spam', 'spam', 'spAM', or 'ſpam' (the latter is matched only in Unicode mode). This lowercasing doesn’t take the current locale into account; it will if you also set the LOCALE flag.
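A small sketch of the flag in action:

```python
import re

# With IGNORECASE, the character class [a-z] matches uppercase letters too.
p = re.compile('[a-z]+', re.IGNORECASE)
print(p.match('SPAM').group())
```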

LOCALE
L

Make \w, \W, \b, \B and case-insensitive matching dependent on the current locale instead of the Unicode database.

Locales are a feature of the C library intended to help in writing programs that take account of language differences. For example, if you’re processing encoded French text, you’d want to be able to write \w+ to match words, but \w only matches the character class [A-Za-z] in bytes patterns; it won’t match bytes corresponding to é or ç. If your system is configured properly and a French locale is selected, certain C functions will tell the program that the byte corresponding to é should also be considered a letter. Setting the LOCALE flag when compiling a regular expression will cause the resulting compiled object to use these C functions for \w; this is slower, but also enables \w+ to match French words as you’d expect. The use of this flag is discouraged in Python 3 as the locale mechanism is very unreliable, it only handles one “culture” at a time, and it only works with 8-bit locales. Unicode matching is already enabled by default in Python 3 for Unicode (str) patterns, and it is able to handle different locales/languages.

MULTILINE
M

(^ and $ haven’t been explained yet; they’ll be introduced in section More Metacharacters.)

Usually ^ matches only at the beginning of the string, and $ matches only at the end of the string and immediately before the newline (if any) at the end of the string. When this flag is specified, ^ matches at the beginning of the string and at the beginning of each line within the string, immediately following each newline. Similarly, the $ metacharacter matches at the end of the string and at the end of each line (immediately preceding each newline).
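To see the difference, a minimal sketch:

```python
import re

text = 'first line\nsecond line'

# Without MULTILINE, ^ only matches at the very start of the string.
print(re.findall(r'^\w+', text))

# With MULTILINE, ^ also matches just after each newline.
print(re.findall(r'^\w+', text, re.MULTILINE))
```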

DOTALL
S

Makes the '.' special character match any character at all, including a newline; without this flag, '.' will match anything except a newline.
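For instance, a quick sketch of the newline behaviour:

```python
import re

# Without DOTALL, '.' refuses to match the newline, so there is no match.
print(re.match(r'a.b', 'a\nb'))

# With DOTALL, '.' matches the newline as well.
print(re.match(r'a.b', 'a\nb', re.DOTALL))
```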

ASCII
A

Make \w, \W, \b, \B, \s and \S perform ASCII-only matching instead of full Unicode matching. This is only meaningful for Unicode patterns, and is ignored for byte patterns.
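A small sketch showing \w narrowing to ASCII:

```python
import re

# By default, \w matches Unicode word characters.
print(re.findall(r'\w+', 'café naïve'))

# With re.ASCII, \w is equivalent to [a-zA-Z0-9_], so the accented
# letters split the words.
print(re.findall(r'\w+', 'café naïve', re.ASCII))
```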

VERBOSE
X

This flag allows you to write regular expressions that are more readable by granting you more flexibility in how you can format them. When this flag has been specified, whitespace within the RE string is ignored, except when the whitespace is in a character class or preceded by an unescaped backslash; this lets you organize and indent the RE more clearly. This flag also lets you put comments within a RE that will be ignored by the engine; comments are marked by a '#' that’s neither in a character class nor preceded by an unescaped backslash.

For example, here’s a RE that uses re.VERBOSE; see how much easier it is to read?

charref = re.compile(r"""
 &[#]                # Start of a numeric entity reference
 (
     0[0-7]+         # Octal form
   | [0-9]+          # Decimal form
   | x[0-9a-fA-F]+   # Hexadecimal form
 )
 ;                   # Trailing semicolon
""", re.VERBOSE)

Without the verbose setting, the RE would look like this:

charref = re.compile("&#(0[0-7]+"
                     "|[0-9]+"
                     "|x[0-9a-fA-F]+);")

In the above example, Python’s automatic concatenation of string literals has been used to break up the RE into smaller pieces, but it’s still more difficult to understand than the version using re.VERBOSE.

More Pattern Power

So far we’ve only covered a part of the features of regular expressions. In this section, we’ll cover some new metacharacters, and how to use groups to retrieve portions of the text that was matched.

More Metacharacters

There are some metacharacters that we haven’t covered yet. Most of them will be covered in this section.

Some of the remaining metacharacters to be discussed are zero-width assertions. They don’t cause the engine to advance through the string; instead, they consume no characters at all, and simply succeed or fail. For example, \b is an assertion that the current position is located at a word boundary; the position isn’t changed by the \b at all. This means that zero-width assertions should never be repeated, because if they match once at a given location, they can obviously be matched an infinite number of times.
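A quick sketch showing that the assertion consumes no characters:

```python
import re

# \b is zero-width: the resulting match covers only 'cat' itself,
# not the spaces on either side.
m = re.search(r'\bcat\b', 'a cat sat')
print(m.span())
```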

|

Alternation, or the “or” operator. If A and B are regular expressions, A|B will match any string that matches either A or B. | has very low precedence in order to make it work reasonably when you’re alternating multi-character strings. Crow|Servo will match either 'Crow' or 'Servo', not 'Cro', a 'w' or an 'S', and 'ervo'.

To match a literal '|', use \|, or enclose it inside a character class, as in [|].
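A minimal sketch of alternating multi-character strings:

```python
import re

# | alternates the whole strings 'Crow' and 'Servo', not single letters.
print(re.findall(r'Crow|Servo', 'Crow and Servo'))
```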

^

Matches at the beginning of lines. Unless the MULTILINE flag has been set, this will only match at the beginning of the string. In MULTILINE mode, this also matches immediately after each newline within the string.

For example, if you wish to match the word From only at the beginning of a line, the RE to use is ^From.

>>> print(re.search('^From', 'From Here to Eternity'))
<re.Match object; span=(0, 4), match='From'>
>>> print(re.search('^From', 'Reciting From Memory'))
None

To match a literal '^', use \^.

$

Matches at the end of a line, which is defined as either the end of the string, or any location followed by a newline character.

>>> print(re.search('}$', '{block}'))
<re.Match object; span=(6, 7), match='}'>
>>> print(re.search('}$', '{block} '))
None
>>> print(re.search('}$', '{block}\n'))
<re.Match object; span=(6, 7), match='}'>

To match a literal '$', use \$ or enclose it inside a character class, as in [$].

\A

Matches only at the start of the string. When not in MULTILINE mode, \A and ^ are effectively the same. In MULTILINE mode, they’re different: \A still matches only at the beginning of the string, but ^ may match at any location inside the string that follows a newline character.

\Z

Matches only at the end of the string.

\b

Word boundary. This is a zero-width assertion that matches only at the beginning or end of a word. A word is defined as a sequence of alphanumeric characters, so the end of a word is indicated by whitespace or a non-alphanumeric character.

The following example matches class only when it’s a complete word; it won’t match when it’s contained inside another word.

>>> p = re.compile(r'\bclass\b')
>>> print(p.search('no class at all'))
<re.Match object; span=(3, 8), match='class'>
>>> print(p.search('the declassified algorithm'))
None

There are two subtleties you should remember when using this special sequence. First, this is the worst collision between Python’s string literals and regular expression sequences. In Python’s string literals, \b is the backspace character, ASCII value 8. If you’re not using raw strings, then Python will convert the \b to a backspace, and your RE won’t match as you expect it to. The following example looks the same as our previous RE, but omits the 'r' in front of the RE string.

>>> p = re.compile('\bclass\b')
>>> print(p.search('no class at all'))
None
>>> print(p.search('\b' + 'class' + '\b'))
<re.Match object; span=(0, 7), match='\x08class\x08'>

Second, inside a character class, where there’s no use for this assertion, \b represents the backspace character, for compatibility with Python’s string literals.

\B

Another zero-width assertion, this is the opposite of \b, only matching when the current position is not at a word boundary.

Grouping

Frequently you need to obtain more information than just whether the RE matched or not. Regular expressions are often used to dissect strings by writing a RE divided into several subgroups which match different components of interest. For example, an RFC-822 header line is divided into a header name and a value, separated by a ':', like this:

From: author@example.com
User-Agent: Thunderbird 1.5.0.9 (X11/20061227)
MIME-Version: 1.0
To: editor@example.com

This can be handled by writing a regular expression which matches an entire header line, and has one group which matches the header name, and another group which matches the header’s value.

Groups are marked by the '(', ')' metacharacters. '(' and ')' have much the same meaning as they do in mathematical expressions; they group together the expressions contained inside them, and you can repeat the contents of a group with a quantifier, such as *, +, ?, or {m,n}. For example, (ab)* will match zero or more repetitions of ab.

>>> p = re.compile('(ab)*')
>>> print(p.match('ababababab').span())
(0, 10)

Groups indicated with '(', ')' also capture the starting and ending index of the text that they match; this can be retrieved by passing an argument to group(), start(), end(), and span(). Groups are numbered starting with 0. Group 0 is always present; it’s the whole RE, so match object methods all have group 0 as their default argument. Later we’ll see how to express groups that don’t capture the span of text that they match.

>>> p = re.compile('(a)b')
>>> m = p.match('ab')
>>> m.group()
'ab'
>>> m.group(0)
'ab'

Subgroups are numbered from left to right, from 1 upward. Groups can be nested; to determine the number, just count the opening parenthesis characters, going from left to right.

>>> p = re.compile('(a(b)c)d')
>>> m = p.match('abcd')
>>> m.group(0)
'abcd'
>>> m.group(1)
'abc'
>>> m.group(2)
'b'

group() can be passed multiple group numbers at a time, in which case it will return a tuple containing the corresponding values for those groups.

>>> m.group(2,1,2)
('b', 'abc', 'b')

The groups() method returns a tuple containing the strings for all the subgroups, from 1 up to however many there are.

>>> m.groups()
('abc', 'b')

Backreferences in a pattern allow you to specify that the contents of an earlier capturing group must also be found at the current location in the string. For example, \1 will succeed if the exact contents of group 1 can be found at the current position, and fails otherwise. Remember that Python’s string literals also use a backslash followed by numbers to allow including arbitrary characters in a string, so be sure to use a raw string when incorporating backreferences in a RE.

For example, the following RE detects doubled words in a string.

>>> p = re.compile(r'\b(\w+)\s+\1\b')
>>> p.search('Paris in the the spring').group()
'the the'

Backreferences like this aren’t often useful for just searching through a string — there are few text formats which repeat data in this way — but you’ll soon find out that they’re very useful when performing string substitutions.

Non-capturing and Named Groups

Elaborate REs may use many groups, both to capture substrings of interest, and to group and structure the RE itself. In complex REs, it becomes difficult to keep track of the group numbers. There are two features which help with this problem. Both of them use a common syntax for regular expression extensions, so we’ll look at that first.

Perl 5 is well known for its powerful additions to standard regular expressions. For these new features the Perl developers couldn’t choose new single-keystroke metacharacters or new special sequences beginning with '\' without making Perl’s regular expressions confusingly different from standard REs. If they chose '&' as a new metacharacter, for example, old expressions would be assuming that '&' was a regular character and wouldn’t have escaped it by writing \& or [&].

The solution chosen by the Perl developers was to use (?...) as the extension syntax. ? immediately after a parenthesis was a syntax error because the ? would have nothing to repeat, so this didn’t introduce any compatibility problems. The characters immediately after the ? indicate what extension is being used, so (?=foo) is one thing (a positive lookahead assertion) and (?:foo) is something else (a non-capturing group containing the subexpression foo).

Python supports several of Perl’s extensions and adds an extension syntax to Perl’s extension syntax. If the first character after the question mark is a P, you know that it’s an extension that’s specific to Python.

Now that we’ve looked at the general extension syntax, we can return to the features that simplify working with groups in complex REs.

Sometimes you’ll want to use a group to denote a part of a regular expression, but aren’t interested in retrieving the group’s contents. You can make this fact explicit by using a non-capturing group:

(?:...), where you can replace the ... with any other regular expression.

>>> m = re.match("([abc])+", "abc")
>>> m.groups()
('c',)
>>> m = re.match("(?:[abc])+", "abc")
>>> m.groups()
()

Except for the fact that you can’t retrieve the contents of what the group matched, a non-capturing group behaves exactly the same as a capturing group; you can put anything inside it, repeat it with a repetition metacharacter such as *, and nest it within other groups (capturing or non-capturing). (?:...) is particularly useful when modifying an existing pattern, since you can add new groups without changing how all the other groups are numbered. It should be mentioned that there’s no performance difference in searching between capturing and non-capturing groups; neither form is any faster than the other.

A more significant feature is named groups: instead of referring to them by numbers, groups can be referenced by a name.

The syntax for a named group is one of the Python-specific extensions:

(?P<name>...). name is, obviously, the name of the group. Named groups behave exactly like capturing groups, and additionally associate a name with a group. The match object methods that deal with capturing groups all accept either integers that refer to the group by number or strings that contain the desired group’s name. Named groups are still given numbers, so you can retrieve information about a group in two ways:

>>> p = re.compile(r'(?P<word>\b\w+\b)')
>>> m = p.search( '(((( Lots of punctuation )))' )
>>> m.group('word')
'Lots'
>>> m.group(1)
'Lots'

Additionally, you can retrieve named groups as a dictionary with groupdict():

>>> m = re.match(r'(?P<first>\w+) (?P<last>\w+)', 'Jane Doe')
>>> m.groupdict()
{'first': 'Jane', 'last': 'Doe'}

Named groups are handy because they let you use easily remembered names, instead of having to remember numbers. Here’s an example RE from the imaplib module:

InternalDate = re.compile(r'INTERNALDATE "'
        r'(?P<day>[ 0123][0-9])-(?P<mon>[A-Z][a-z][a-z])-'
        r'(?P<year>[0-9][0-9][0-9][0-9])'
        r' (?P<hour>[0-9][0-9]):(?P<min>[0-9][0-9]):(?P<sec>[0-9][0-9])'
        r' (?P<zonen>[-+])(?P<zoneh>[0-9][0-9])(?P<zonem>[0-9][0-9])'
        r'"')

It’s obviously much easier to retrieve m.group('zonem'), instead of having to remember to retrieve group 9.

The syntax for backreferences in an expression such as (...)\1 refers to the number of the group. There’s naturally a variant that uses the group name instead of the number. This is another Python extension: (?P=name) indicates that the contents of the group called name should again be matched at the current point. The regular expression for finding doubled words, \b(\w+)\s+\1\b can also be written as \b(?P<word>\w+)\s+(?P=word)\b:

>>> p = re.compile(r'\b(?P<word>\w+)\s+(?P=word)\b')
>>> p.search('Paris in the the spring').group()
'the the'

Lookahead Assertions

Another zero-width assertion is the lookahead assertion. Lookahead assertions are available in both positive and negative form, and look like this:

(?=...)

Positive lookahead assertion. This succeeds if the contained regular expression, represented here by ..., successfully matches at the current location, and fails otherwise. But, once the contained expression has been tried, the matching engine doesn’t advance at all; the rest of the pattern is tried right where the assertion started.

(?!...)

Negative lookahead assertion. This is the opposite of the positive assertion; it succeeds if the contained expression doesn’t match at the current position in the string.
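A minimal sketch of both forms, using made-up text:

```python
import re

text = 'foobar foobaz'

# Positive lookahead: 'foo' only where it is followed by 'bar'.
print([m.start() for m in re.finditer(r'foo(?=bar)', text)])

# Negative lookahead: 'foo' only where it is NOT followed by 'bar'.
print([m.start() for m in re.finditer(r'foo(?!bar)', text)])
```

In both cases the match itself spans only 'foo'; the lookahead consumes nothing.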

To make this concrete, let’s look at a case where a lookahead is useful. Consider a simple pattern to match a filename and split it apart into a base name and an extension, separated by a '.'. For example, in news.rc, news is the base name, and rc is the filename’s extension.

The pattern to match this is quite simple:

.*[.].*$

Notice that the '.' needs to be treated specially because it’s a metacharacter, so it’s inside a character class to only match that specific character. Also notice the trailing $; this is added to ensure that all the rest of the string must be included in the extension. This regular expression matches foo.bar and autoexec.bat and sendmail.cf and printers.conf.

Now, consider complicating the problem a bit; what if you want to match filenames where the extension is not bat? Some incorrect attempts:

.*[.][^b].*$

The first attempt above tries to exclude bat by requiring that the first character of the extension is not a b. This is wrong, because the pattern also doesn’t match foo.bar.

.*[.]([^b]..|.[^a].|..[^t])$

The expression gets messier when you try to patch up the first solution by requiring one of the following cases to match: the first character of the extension isn’t b; the second character isn’t a; or the third character isn’t t. This accepts foo.bar and rejects autoexec.bat, but it requires a three-letter extension and won’t accept a filename with a two-letter extension such as sendmail.cf. We’ll complicate the pattern again in an effort to fix it.

.*[.]([^b].?.?|.[^a]?.?|..?[^t]?)$

In the third attempt, the second and third letters are all made optional in order to allow matching extensions shorter than three characters, such as sendmail.cf.

The pattern’s getting really complicated now, which makes it hard to read and understand. Worse, if the problem changes and you want to exclude both bat and exe as extensions, the pattern would get even more complicated and confusing.

A negative lookahead cuts through all this confusion:

.*[.](?!bat$)[^.]*$

The negative lookahead means: if the expression bat doesn’t match at this point, try the rest of the pattern; if bat$ does match, the whole pattern will fail. The trailing $ is required to ensure that something like sample.batch, where the extension only starts with bat, will be allowed. The [^.]* makes sure that the pattern works when there are multiple dots in the filename.

Excluding another filename extension is now easy; simply add it as an alternative inside the assertion. The following pattern excludes filenames that end in either

bat or exe:

.*[.](?!bat$|exe$)[^.]*$

Modifying Strings

Up to this point, we’ve simply performed searches against a static string. Regular expressions are also commonly used to modify strings in various ways, using the following pattern methods:

Method/Attribute

Purpose

split()

Split the string into a list, splitting it wherever the RE matches

sub()

Find all substrings where the RE matches, and replace them with a different string

subn()

Does the same thing as sub(), but returns the new string and the number of replacements

Splitting Strings

The split() method of a pattern splits a string apart wherever the RE matches, returning a list of the pieces. It’s similar to the split() method of strings but provides much more generality in the delimiters that you can split by; string split() only supports splitting by whitespace or by a fixed string. As you’d expect, there’s a module-level re.split() function, too.

.split(string[, maxsplit=0])

Split string by the matches of the regular expression. If capturing parentheses are used in the RE, then their contents will also be returned as part of the resulting list. If maxsplit is nonzero, at most maxsplit splits are performed.

You can limit the number of splits made, by passing a value for maxsplit. When maxsplit is nonzero, at most maxsplit splits will be made, and the remainder of the string is returned as the final element of the list. In the following example, the delimiter is any sequence of non-alphanumeric characters.

>>> p = re.compile(r'\W+')
>>> p.split('This is a test, short and sweet, of split().')
['This', 'is', 'a', 'test', 'short', 'and', 'sweet', 'of', 'split', '']
>>> p.split('This is a test, short and sweet, of split().', 3)
['This', 'is', 'a', 'test, short and sweet, of split().']

Sometimes you’re not only interested in what the text between delimiters is, but also need to know what the delimiter was. If capturing parentheses are used in the RE, then their values are also returned as part of the list. Compare the following calls:

>>> p = re.compile(r'\W+')
>>> p2 = re.compile(r'(\W+)')
>>> p.split('This... is a test.')
['This', 'is', 'a', 'test', '']
>>> p2.split('This... is a test.')
['This', '... ', 'is', ' ', 'a', ' ', 'test', '.', '']

The module-level re.split() function adds the RE to be used as the first argument, but is otherwise the same.

>>> re.split(r'[\W]+', 'Words, words, words.')
['Words', 'words', 'words', '']
>>> re.split(r'([\W]+)', 'Words, words, words.')
['Words', ', ', 'words', ', ', 'words', '.', '']
>>> re.split(r'[\W]+', 'Words, words, words.', 1)
['Words', 'words, words.']

Search and Replace

Another common task is to find all the matches for a pattern, and replace them with a different string. The sub() method takes a replacement value, which can be either a string or a function, and the string to be processed.

.sub(replacement, string[, count=0])

Returns the string obtained by replacing the leftmost non-overlapping occurrences of the RE in string by the replacement replacement. If the pattern isn’t found, string is returned unchanged.

The optional argument count is the maximum number of pattern occurrences to be replaced; count must be a non-negative integer. The default value of 0 means to replace all occurrences.

Here’s a simple example of using the sub() method. It replaces colour names with the word colour:

>>> p = re.compile('(blue|white|red)')
>>> p.sub('colour', 'blue socks and red shoes')
'colour socks and colour shoes'
>>> p.sub('colour', 'blue socks and red shoes', count=1)
'colour socks and red shoes'

The subn() method does the same work, but returns a 2-tuple containing the new string value and the number of replacements that were performed:

>>> p = re.compile('(blue|white|red)')
>>> p.subn('colour', 'blue socks and red shoes')
('colour socks and colour shoes', 2)
>>> p.subn('colour', 'no colours at all')
('no colours at all', 0)

Empty matches are replaced only when they’re not adjacent to a previous empty match.

>>> p = re.compile('x*')
>>> p.sub('-', 'abxd')
'-a-b--d-'

If replacement is a string, any backslash escapes in it are processed. That is, \n is converted to a single newline character, \r is converted to a carriage return, and so forth. Unknown escapes such as \& are left alone. Backreferences, such as \6, are replaced with the substring matched by the corresponding group in the RE. This lets you incorporate portions of the original text in the resulting replacement string.

This example matches the word section followed by a string enclosed in {, }, and changes section to subsection:

>>> p = re.compile('section{ ( [^}]* ) }', re.VERBOSE)
>>> p.sub(r'subsection{\1}','section{First} section{second}')
'subsection{First} subsection{second}'

There’s also a syntax for referring to named groups as defined by the (?P<name>...) syntax. \g<name> will use the substring matched by the group named name, and \g<number> uses the corresponding group number. \g<2> is therefore equivalent to \2, but isn’t ambiguous in a replacement string such as \g<2>0. (\20 would be interpreted as a reference to group 20, not a reference to group 2 followed by the literal character '0'.) The following substitutions are all equivalent, but use all three variations of the replacement string.

>>> p = re.compile('section{ (?P<name> [^}]* ) }', re.VERBOSE)
>>> p.sub(r'subsection{\1}','section{First}')
'subsection{First}'
>>> p.sub(r'subsection{\g<1>}','section{First}')
'subsection{First}'
>>> p.sub(r'subsection{\g<name>}','section{First}')
'subsection{First}'

replacement can also be a function, which gives you even more control. If replacement is a function, the function is called for every non-overlapping occurrence of pattern. On each call, the function is passed a match object argument for the match and can use this information to compute the desired replacement string and return it.

In the following example, the replacement function translates decimals into hexadecimal:

>>> def hexrepl(match):
...     "Return the hex string for a decimal number"
...     value = int(match.group())
...     return hex(value)
...
>>> p = re.compile(r'\d+')
>>> p.sub(hexrepl, 'Call 65490 for printing, 49152 for user code.')
'Call 0xffd2 for printing, 0xc000 for user code.'

When using the module-level re.sub() function, the pattern is passed as the first argument. The pattern may be provided as an object or as a string; if you need to specify regular expression flags, you must either use a pattern object as the first parameter, or use embedded modifiers in the pattern string, e.g. sub("(?i)b+", "x", "bbbb bbbb") returns 'x x'.
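As a sketch of the two ways to supply a flag at the module level (strings are illustrative):

```python
import re

text = 'Bbbb BBBB'

# Option 1: pass a compiled pattern object that carries the flag.
pat = re.compile('b+', re.IGNORECASE)
print(pat.sub('x', text))           # x x

# Option 2: embed the modifier in the pattern string itself.
print(re.sub('(?i)b+', 'x', text))  # x x
```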

Common Problems

Regular expressions are a powerful tool for some applications, but in some ways their behaviour isn’t intuitive and at times they don’t behave the way you may expect them to. This section will point out some of the most common pitfalls.

Use String Methods

Sometimes using the re module is a mistake. If you’re matching a fixed string, or a single character class, and you’re not using any re features such as the IGNORECASE flag, then the full power of regular expressions may not be required. Strings have several methods for performing operations with fixed strings and they’re usually much faster, because the implementation is a single small C loop that’s been optimized for the purpose, instead of the large, more generalized regular expression engine.

One example might be replacing a single fixed string with another one; for example, you might replace word with deed. re.sub() seems like the function to use for this, but consider the replace() method. Note that replace() will also replace word inside words, turning swordfish into sdeedfish, but the naive RE word would have done that, too. (To avoid performing the substitution on parts of words, the pattern would have to be \bword\b, in order to require that word have a word boundary on either side. This takes the job beyond replace()’s abilities.)
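A small sketch contrasting the two approaches (the strings are illustrative):

```python
import re

s = 'swordfish uses a word'

# str.replace() substitutes inside larger words too...
print(s.replace('word', 'deed'))       # sdeedfish uses a deed

# ...while \b anchors restrict the RE to whole words.
print(re.sub(r'\bword\b', 'deed', s))  # swordfish uses a deed
```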

Another common task is deleting every occurrence of a single character from a string or replacing it with another single character. You might do this with something like re.sub('\n', ' ', S), but translate() is capable of doing both tasks and will be faster than any regular expression operation can be.
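For example, a sketch of str.translate() doing both jobs at once (the characters chosen are illustrative):

```python
# str.maketrans() builds the mapping table; mapping a character to
# None deletes it, while mapping to another character replaces it.
table = str.maketrans({'\n': ' ', '-': None})
print('a-b\nc-d'.translate(table))  # ab cd
```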

In short, before turning to the re module, consider whether your problem can be solved with a faster and simpler string method.

match() versus search()

The match() function only checks if the RE matches at the beginning of the string while search() will scan forward through the string for a match. It’s important to keep this distinction in mind. Remember, match() will only report a successful match which will start at 0; if the match wouldn’t start at zero, match() will not report it.

>>> print(re.match('super', 'superstition').span())
(0, 5)
>>> print(re.match('super', 'insuperable'))
None

On the other hand, search() will scan forward through the string, reporting the first match it finds.

>>> p.match("")
>>> print(p.match(""))
None
0

Sometimes you’ll be tempted to keep using re.match(), and just add .* to the front of your RE. Resist this temptation and use re.search() instead. The regular expression compiler does some analysis of REs in order to speed up the process of looking for a match. One such analysis figures out what the first character of a match must be; for example, a pattern starting with Crow must match starting with a 'C'. The analysis lets the engine quickly scan through the string looking for the starting character, only trying the full match if a 'C' is found.

Adding .* defeats this optimization, requiring scanning to the end of the string and then backtracking to find a match for the rest of the RE. Use re.search() instead.
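A quick sketch of the difference (the string is illustrative): both calls find the same text, but the .*-prefixed match() has to consume the whole string and backtrack.

```python
import re

s = 'insuperable'

# Preferred: search() scans forward and stops at the first match.
print(re.search('super', s).span())   # (2, 7)

# Anti-pattern: match() with a '.*' prefix finds the same text, but
# only after scanning to the end of the string and backtracking.
print(re.match('.*super', s).span())  # (0, 7)
```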

Greedy versus Non-Greedy

When repeating a regular expression, as in a*, the resulting action is to consume as much of the pattern as possible. This fact often bites you when you’re trying to match a pair of balanced delimiters, such as the angle brackets surrounding an HTML tag. The naive pattern for matching a single HTML tag doesn’t work because of the greedy nature of .*.

>>> s = '<html><head><title>Title</title>'
>>> len(s)
32
>>> print(re.match('<.*>', s).span())
(0, 32)
>>> print(re.match('<.*>', s).group())
<html><head><title>Title</title>

The RE matches the '<' in '<html>', and the .* consumes the rest of the string. There’s still more left in the RE, though, and the > can’t match at the end of the string, so the regular expression engine has to backtrack character by character until it finds a match for the >. The final match extends from the '<' in '<html>' to the '>' in '</title>', which isn’t what you want.

In this case, the solution is to use the non-greedy quantifiers *?, +?, ??, or {m,n}?, which match as little text as possible. In the above example, the '>' is tried immediately after the first '<' matches, and when it fails, the engine advances a character at a time, retrying the '>' at every step. This produces just the right result:

>>> print(re.match('<.*?>', s).group())
<html>

(Note that parsing HTML or XML with regular expressions is painful. Quick-and-dirty patterns will handle common cases, but HTML and XML have special cases that will break the obvious regular expression; by the time you’ve written a regular expression that handles all of the possible cases, the patterns will be very complicated. Use an HTML or XML parser module for such tasks.)
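As a minimal sketch of the parser-based alternative, using the standard library’s html.parser module (the subclass name is mine):

```python
from html.parser import HTMLParser

# Collect start-tag names with the stdlib parser instead of a regex.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed('<html><head><title>Title</title>')
print(collector.tags)  # ['html', 'head', 'title']
```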

Using re.VERBOSE

By now you’ve probably noticed that regular expressions are a very compact notation, but they’re not terribly readable. REs of moderate complexity can become lengthy collections of backslashes, parentheses, and metacharacters, making them difficult to read and understand.

For such REs, specifying the re.VERBOSE flag when compiling the regular expression can be helpful, because it allows you to format the regular expression more clearly.

The re.VERBOSE flag has several effects. Whitespace in the regular expression that isn’t inside a character class is ignored. This means that an expression such as dog | cat is equivalent to the less readable dog|cat, but [a b] will still match the characters 'a', 'b', or a space. In addition, you can also put comments inside a RE; comments extend from a # character to the next newline. When used with triple-quoted strings, this enables REs to be formatted more neatly:

pat = re.compile(r"""
 \s*                 # Skip leading whitespace
 (?P<header>[^:]+)   # Header name
 \s* :               # Whitespace, and a colon
 (?P<value>.*?)      # The header's value -- *? used to
                     # lose the following trailing whitespace
 \s*$                # Trailing whitespace to end-of-line
""", re.VERBOSE)

This is far more readable than:

>>> p.match("")
>>> print(p.match(""))
None
4
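Here’s a self-contained usage sketch of such a verbose header pattern (this variant also swallows whitespace after the colon, so the captured value is already trimmed; the input line is illustrative):

```python
import re

# A header-line pattern written with re.VERBOSE: whitespace in the
# pattern is ignored and # comments run to the end of the line.
pat = re.compile(r"""
 \s*                  # skip leading whitespace
 (?P<header>[^:]+)    # header name
 \s* : \s*            # colon, with optional surrounding whitespace
 (?P<value>.*?)       # the header's value
 \s*$                 # trailing whitespace to end-of-line
""", re.VERBOSE)

m = pat.match('From:  author@example.com ')
print(m.group('header'))  # From
print(m.group('value'))   # author@example.com
```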

Feedback

Regular expressions are a complicated topic. Did this document help you understand them? Were there parts that were unclear, or problems you encountered that weren’t covered here? If so, please send suggestions for improvements to the author.

The most complete book on regular expressions is almost certainly Jeffrey Friedl’s Mastering Regular Expressions, published by O’Reilly. Unfortunately, it exclusively concentrates on Perl and Java’s flavours of regular expressions, and doesn’t contain any Python material at all, so it won’t be useful as a reference for programming in Python. (The first edition covered Python’s now-removed regex module, which won’t help you much.) Consider checking it out from your library.