Server: LiteSpeed
System: Linux us-phx-web1284.main-hosting.eu 4.18.0-553.109.1.lve.el8.x86_64 #1 SMP Thu Mar 5 20:23:46 UTC 2026 x86_64
User: u300739242 (300739242)
PHP: 8.2.30
Disabled: system, shell_exec, passthru, mysql_list_dbs, ini_alter, dl, symlink, link, chgrp, leak, popen, apache_child_terminate, virtual, mb_send_mail
File: //opt/alt/python311/lib/python3.11/site-packages/pygments/__pycache__/lexer.cpython-311.pyc
[binary content: compiled CPython 3.11 bytecode, not displayable as text]

Recoverable strings from the bytecode:

  Module docstring:
    pygments.lexer
    ~~~~~~~~~~~~~~
    Base lexer classes.
    :copyright: Copyright 2006-2024 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.

  Exported names (__all__):
    Lexer, RegexLexer, ExtendedRegexLexer, DelegatingLexer, LexerContext,
    include, inherit, bygroups, using, this, default, words, line_re

  Classes defined in the module include:
    LexerMeta, Lexer, DelegatingLexer, combined, _PseudoMatch,
    RegexLexerMeta, RegexLexer, LexerContext, ExtendedRegexLexer,
    ProfilingRegexLexerMeta, ProfilingRegexLexer