[Binary artifact: compiled Python 2.6 bytecode dumps from the lib2to3 package. The bytecode is not recoverable as source; only the filenames and embedded docstrings survive:

- refactor.pyc — "Refactoring framework. Used as a main program, this can refactor any number of files and/or recursively descend down directories. Imported as a module, this provides infrastructure to write your own refactoring tool."
- fixer_base.pyc — "Base class for fixers (optional, but recommended)."
- Grammar2.6.6.final.0.pickle — pickled grammar tables (tokens, symbols, DFA states) for the Python 2.6.6 parser.
- pytree.pyc — "Python parse tree definitions. This is a very concrete parse tree; we need to keep every token and even the comments and whitespace between tokens. There's also a pattern matching implementation here."]
# --- refactor.py ---

# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Refactoring framework.

Used as a main program, this can refactor any number of files and/or
recursively descend down directories.
Imported as a module, this provides infrastructure to write your own
refactoring tool.
"""

from __future__ import with_statement

__author__ = "Guido van Rossum <guido@python.org>"


# Python imports
import os
import sys
import logging
import operator
import collections
import StringIO
from itertools import chain

# Local imports
from .pgen2 import driver, tokenize, token
from . import pytree, pygram


def get_all_fix_names(fixer_pkg, remove_prefix=True):
    """Return a sorted list of all available fix names in the given package."""
    pkg = __import__(fixer_pkg, [], [], ["*"])
    fixer_dir = os.path.dirname(pkg.__file__)
    fix_names = []
    for name in sorted(os.listdir(fixer_dir)):
        if name.startswith("fix_") and name.endswith(".py"):
            if remove_prefix:
                name = name[4:]
            fix_names.append(name[:-3])
    return fix_names


class _EveryNode(Exception):
    pass


def _get_head_types(pat):
    """ Accepts a pytree Pattern Node and returns a set
        of the pattern types which will match first. """

    if isinstance(pat, (pytree.NodePattern, pytree.LeafPattern)):
        # NodePatterns must either have no type and no content
        #   or a type and content -- so they don't get any farther
        # Always return leafs
        if pat.type is None:
            raise _EveryNode
        return set([pat.type])

    if isinstance(pat, pytree.NegatedPattern):
        if pat.content:
            return _get_head_types(pat.content)
        raise _EveryNode  # Negated Patterns don't have a type

    if isinstance(pat, pytree.WildcardPattern):
        # Recurse on each node in content
        r = set()
        for p in pat.content:
            for x in p:
                r.update(_get_head_types(x))
        return r

    raise Exception("Oh no! I don't understand pattern %s" % (pat))


def _get_headnode_dict(fixer_list):
    """ Accepts a list of fixers and returns a dictionary
        of head node type --> fixer list.
    """
    head_nodes = collections.defaultdict(list)
    every = []
    for fixer in fixer_list:
        if fixer.pattern:
            try:
                heads = _get_head_types(fixer.pattern)
            except _EveryNode:
                every.append(fixer)
            else:
                for node_type in heads:
                    head_nodes[node_type].append(fixer)
        else:
            if fixer._accept_type is not None:
                head_nodes[fixer._accept_type].append(fixer)
            else:
                every.append(fixer)
    for node_type in chain(pygram.python_grammar.symbol2number.itervalues(),
                           pygram.python_grammar.tokens):
        head_nodes[node_type].extend(every)
    return dict(head_nodes)


def get_fixers_from_package(pkg_name):
    """
    Return the fully qualified names for fixers in the package pkg_name.
    """
    return [pkg_name + "." + fix_name
            for fix_name in get_all_fix_names(pkg_name, False)]


def _identity(obj):
    return obj


if sys.version_info < (3, 0):
    import codecs
    _open_with_encoding = codecs.open
    # codecs.open doesn't translate newlines sadly.
    def _from_system_newlines(input):
        return input.replace(u"\r\n", u"\n")
    def _to_system_newlines(input):
        if os.linesep != "\n":
            return input.replace(u"\n", os.linesep)
        else:
            return input
else:
    _open_with_encoding = open
    _from_system_newlines = _identity
    _to_system_newlines = _identity


def _detect_future_features(source):
    have_docstring = False
    gen = tokenize.generate_tokens(StringIO.StringIO(source).readline)
    def advance():
        tok = gen.next()
        return tok[0], tok[1]
    ignore = frozenset((token.NEWLINE, tokenize.NL, token.COMMENT))
    features = set()
    try:
        while True:
            tp, value = advance()
            if tp in ignore:
                continue
            elif tp == token.STRING:
                if have_docstring:
                    break
                have_docstring = True
            elif tp == token.NAME and value == u"from":
                tp, value = advance()
                if tp != token.NAME or value != u"__future__":
                    break
                tp, value = advance()
                if tp != token.NAME or value != u"import":
                    break
                tp, value = advance()
                if tp == token.OP and value == u"(":
                    tp, value = advance()
                while tp == token.NAME:
                    features.add(value)
                    tp, value = advance()
                    if tp != token.OP or value != u",":
                        break
                    tp, value = advance()
            else:
                break
    except StopIteration:
        pass
    return frozenset(features)


class FixerError(Exception):
    """A fixer could not be loaded."""


class RefactoringTool(object):

    _default_options = {"print_function" : False}

    CLASS_PREFIX = "Fix"   # The prefix for fixer classes
    FILE_PREFIX = "fix_"   # The prefix for modules with a fixer within

    def __init__(self, fixer_names, options=None, explicit=None):
        """Initializer.

        Args:
            fixer_names: a list of fixers to import
            options: a dict with configuration.
            explicit: a list of fixers to run even if they are explicit.
        """
        self.fixers = fixer_names
        self.explicit = explicit or []
        self.options = self._default_options.copy()
        if options is not None:
            self.options.update(options)
        if self.options["print_function"]:
            self.grammar = pygram.python_grammar_no_print_statement
        else:
            self.grammar = pygram.python_grammar
        self.errors = []
        self.logger = logging.getLogger("RefactoringTool")
        self.fixer_log = []
        self.wrote = False
        self.driver = driver.Driver(self.grammar,
                                    convert=pytree.convert,
                                    logger=self.logger)
        self.pre_order, self.post_order = self.get_fixers()

        self.pre_order_heads = _get_headnode_dict(self.pre_order)
        self.post_order_heads = _get_headnode_dict(self.post_order)

        self.files = []  # List of files that were or should be modified

    def get_fixers(self):
        """Inspects the options to load the requested patterns and handlers.

        Returns:
          (pre_order, post_order), where pre_order is the list of fixers that
          want a pre-order AST traversal, and post_order is the list that want
          post-order traversal.
        """
        pre_order_fixers = []
        post_order_fixers = []
        for fix_mod_path in self.fixers:
            mod = __import__(fix_mod_path, {}, {}, ["*"])
            fix_name = fix_mod_path.rsplit(".", 1)[-1]
            if fix_name.startswith(self.FILE_PREFIX):
                fix_name = fix_name[len(self.FILE_PREFIX):]
            parts = fix_name.split("_")
            class_name = self.CLASS_PREFIX + "".join([p.title() for p in parts])
            try:
                fix_class = getattr(mod, class_name)
            except AttributeError:
                raise FixerError("Can't find %s.%s" % (fix_name, class_name))
            fixer = fix_class(self.options, self.fixer_log)
            if fixer.explicit and self.explicit is not True and \
                    fix_mod_path not in self.explicit:
                self.log_message("Skipping implicit fixer: %s", fix_name)
                continue

            self.log_debug("Adding transformation: %s", fix_name)
            if fixer.order == "pre":
                pre_order_fixers.append(fixer)
            elif fixer.order == "post":
                post_order_fixers.append(fixer)
            else:
                raise FixerError("Illegal fixer order: %r" % fixer.order)

        key_func = operator.attrgetter("run_order")
        pre_order_fixers.sort(key=key_func)
        post_order_fixers.sort(key=key_func)
        return (pre_order_fixers, post_order_fixers)

    def log_error(self, msg, *args, **kwds):
        """Called when an error occurs."""
        raise

    def log_message(self, msg, *args):
        """Hook to log a message."""
        if args:
            msg = msg % args
        self.logger.info(msg)

    def log_debug(self, msg, *args):
        if args:
            msg = msg % args
        self.logger.debug(msg)

    def print_output(self, old_text, new_text, filename, equal):
        """Called with the old version, new version, and filename of a
        refactored file."""
        pass

    def refactor(self, items, write=False, doctests_only=False):
        """Refactor a list of files and directories."""
        for dir_or_file in items:
            if os.path.isdir(dir_or_file):
                self.refactor_dir(dir_or_file, write, doctests_only)
            else:
                self.refactor_file(dir_or_file, write, doctests_only)

    def refactor_dir(self, dir_name, write=False, doctests_only=False):
        """Descends down a directory and refactor every Python file found.

        Python files are assumed to have a .py extension.
        Files and subdirectories starting with '.' are skipped.
        """
        for dirpath, dirnames, filenames in os.walk(dir_name):
            self.log_debug("Descending into %s", dirpath)
            dirnames.sort()
            filenames.sort()
            for name in filenames:
                if not name.startswith(".") and \
                        os.path.splitext(name)[1].endswith("py"):
                    fullname = os.path.join(dirpath, name)
                    self.refactor_file(fullname, write, doctests_only)
            # Modify dirnames in-place to remove subdirs with leading dots
            dirnames[:] = [dn for dn in dirnames if not dn.startswith(".")]

    def _read_python_source(self, filename):
        """
        Do our best to decode a Python source file correctly.
        """
        try:
            f = open(filename, "rb")
        except IOError, err:
            self.log_error("Can't open %s: %s", filename, err)
            return None, None
        try:
            encoding = tokenize.detect_encoding(f.readline)[0]
        finally:
            f.close()
        with _open_with_encoding(filename, "r", encoding=encoding) as f:
            return _from_system_newlines(f.read()), encoding

    def refactor_file(self, filename, write=False, doctests_only=False):
        """Refactors a file."""
        input, encoding = self._read_python_source(filename)
        if input is None:
            # Reading the file failed.
            return
        input += u"\n"  # Silence certain parse errors
        if doctests_only:
            self.log_debug("Refactoring doctests in %s", filename)
            output = self.refactor_docstring(input, filename)
            if output != input:
                self.processed_file(output, filename, input, write, encoding)
            else:
                self.log_debug("No doctest changes in %s", filename)
        else:
            tree = self.refactor_string(input, filename)
            if tree and tree.was_changed:
                # The [:-1] is to take off the \n we added earlier
                self.processed_file(unicode(tree)[:-1], filename,
                                    write=write, encoding=encoding)
            else:
                self.log_debug("No changes in %s", filename)

    def refactor_string(self, data, name):
        """Refactor a given input string.

        Args:
            data: a string holding the code to be refactored.
            name: a human-readable name for use in error/log messages.

        Returns:
            An AST corresponding to the refactored input stream; None if
            there were errors during the parse.
        """
        features = _detect_future_features(data)
        if "print_function" in features:
            self.driver.grammar = pygram.python_grammar_no_print_statement
        try:
            tree = self.driver.parse_string(data)
        except Exception, err:
            self.log_error("Can't parse %s: %s: %s",
                           name, err.__class__.__name__, err)
            return
        finally:
            self.driver.grammar = self.grammar
        tree.future_features = features
        self.log_debug("Refactoring %s", name)
        self.refactor_tree(tree, name)
        return tree

    def refactor_stdin(self, doctests_only=False):
        input = sys.stdin.read()
        if doctests_only:
            self.log_debug("Refactoring doctests in stdin")
            output = self.refactor_docstring(input, "<stdin>")
            if output != input:
                self.processed_file(output, "<stdin>", input)
            else:
                self.log_debug("No doctest changes in stdin")
        else:
            tree = self.refactor_string(input, "<stdin>")
            if tree and tree.was_changed:
                self.processed_file(unicode(tree), "<stdin>", input)
            else:
                self.log_debug("No changes in stdin")

    def refactor_tree(self, tree, name):
        """Refactors a parse tree (modifying the tree in place).

        Args:
            tree: a pytree.Node instance representing the root of the tree
                to be refactored.
            name: a human-readable name for this tree.

        Returns:
            True if the tree was modified, False otherwise.
        """
        for fixer in chain(self.pre_order, self.post_order):
            fixer.start_tree(tree, name)

        self.traverse_by(self.pre_order_heads, tree.pre_order())
        self.traverse_by(self.post_order_heads, tree.post_order())

        for fixer in chain(self.pre_order, self.post_order):
            fixer.finish_tree(tree, name)
        return tree.was_changed

    def traverse_by(self, fixers, traversal):
        """Traverse an AST, applying a set of fixers to each node.

        This is a helper method for refactor_tree().

        Args:
            fixers: a list of fixer instances.
            traversal: a generator that yields AST nodes.
        Returns:
            None
        """
        if not fixers:
            return
        for node in traversal:
            for fixer in fixers[node.type]:
                results = fixer.match(node)
                if results:
                    new = fixer.transform(node, results)
                    if new is not None:
                        node.replace(new)
                        node = new

    def processed_file(self, new_text, filename, old_text=None, write=False,
                       encoding=None):
        """
        Called when a file has been refactored, and there are changes.
        """
        self.files.append(filename)
        if old_text is None:
            old_text = self._read_python_source(filename)[0]
            if old_text is None:
                return
        equal = old_text == new_text
        self.print_output(old_text, new_text, filename, equal)
        if equal:
            self.log_debug("No changes to %s", filename)
            return
        if write:
            self.write_file(new_text, filename, old_text, encoding)
        else:
            self.log_debug("Not writing changes to %s", filename)

    def write_file(self, new_text, filename, old_text, encoding=None):
        """Writes a string to a file.

        It first shows a unified diff between the old text and the new
        text, and then rewrites the file; the latter is only done if the
        write option is set.
        """
        try:
            f = _open_with_encoding(filename, "w", encoding=encoding)
        except os.error, err:
            self.log_error("Can't create %s: %s", filename, err)
            return
        try:
            f.write(_to_system_newlines(new_text))
        except os.error, err:
            self.log_error("Can't write %s: %s", filename, err)
        finally:
            f.close()
        self.log_debug("Wrote changes to %s", filename)
        self.wrote = True

    PS1 = ">>> "
    PS2 = "... "

    def refactor_docstring(self, input, filename):
        """Refactors a docstring, looking for doctests.

        This returns a modified version of the input string.  It looks
        for doctests, which start with a ">>>" prompt, and may be
        continued with "..." prompts, as long as the "..." is indented
        the same as the ">>>".

        (Unfortunately we can't use the doctest module's parser,
        since, like most parsers, it is not geared towards preserving
        the original source.)
        """
        result = []
        block = None
        block_lineno = None
        indent = None
        lineno = 0
        for line in input.splitlines(True):
            lineno += 1
            if line.lstrip().startswith(self.PS1):
                if block is not None:
                    result.extend(self.refactor_doctest(block, block_lineno,
                                                        indent, filename))
                block_lineno = lineno
                block = [line]
                i = line.find(self.PS1)
                indent = line[:i]
            elif (indent is not None and
                  (line.startswith(indent + self.PS2) or
                   line == indent + self.PS2.rstrip() + u"\n")):
                block.append(line)
            else:
                if block is not None:
                    result.extend(self.refactor_doctest(block, block_lineno,
                                                        indent, filename))
                block = None
                indent = None
                result.append(line)
        if block is not None:
            result.extend(self.refactor_doctest(block, block_lineno,
                                                indent, filename))
        return u"".join(result)

    def refactor_doctest(self, block, lineno, indent, filename):
        """Refactors one doctest.

        A doctest is given as a block of lines, the first of which starts
        with ">>>" (possibly indented), while the remaining lines start
        with "..." (identically indented).
        """
        try:
            tree = self.parse_block(block, lineno, indent)
        except Exception, err:
            # Note: the original source read "self.log", which is not an
            # attribute of this class; "self.logger" is what __init__ sets.
            if self.logger.isEnabledFor(logging.DEBUG):
                for line in block:
                    self.log_debug("Source: %s", line.rstrip(u"\n"))
            self.log_error("Can't parse docstring in %s line %s: %s: %s",
                           filename, lineno, err.__class__.__name__, err)
            return block
        if self.refactor_tree(tree, filename):
            new = unicode(tree).splitlines(True)
            # Undo the adjustment of the line numbers in wrap_toks() below.
            clipped, new = new[:lineno-1], new[lineno-1:]
            assert clipped == [u"\n"] * (lineno-1), clipped
            if not new[-1].endswith(u"\n"):
                new[-1] += u"\n"
            block = [indent + self.PS1 + new.pop(0)]
            if new:
                block += [indent + self.PS2 + line for line in new]
        return block

    def summarize(self):
        if self.wrote:
            were = "were"
        else:
            were = "need to be"
        if not self.files:
            self.log_message("No files %s modified.", were)
        else:
            self.log_message("Files that %s modified:", were)
            for file in self.files:
                self.log_message(file)
        if self.fixer_log:
            self.log_message("Warnings/messages while refactoring:")
            for message in self.fixer_log:
                self.log_message(message)
        if self.errors:
            if len(self.errors) == 1:
                self.log_message("There was 1 error:")
            else:
                self.log_message("There were %d errors:", len(self.errors))
            for msg, args, kwds in self.errors:
                self.log_message(msg, *args, **kwds)

    def parse_block(self, block, lineno, indent):
        """Parses a block into a tree.

        This is necessary to get correct line number / offset information
        in the parser diagnostics and embedded into the parse tree.
        """
        tree = self.driver.parse_tokens(self.wrap_toks(block, lineno, indent))
        tree.future_features = frozenset()
        return tree

    def wrap_toks(self, block, lineno, indent):
        """Wraps a tokenize stream to systematically modify start/end."""
        tokens = tokenize.generate_tokens(self.gen_lines(block, indent).next)
        for type, value, (line0, col0), (line1, col1), line_text in tokens:
            line0 += lineno - 1
            line1 += lineno - 1
            # Don't bother updating the columns; this is too complicated
            # since line_text would also have to be updated and it would
            # still break for tokens spanning lines.  Let the user guess
            # that the column numbers for doctests are relative to the
            # end of the prompt string (PS1 or PS2).
            yield type, value, (line0, col0), (line1, col1), line_text

    def gen_lines(self, block, indent):
        """Generates lines as expected by tokenize from a list of lines.

        This strips the first len(indent + self.PS1) characters off each line.
        """
        prefix1 = indent + self.PS1
        prefix2 = indent + self.PS2
        prefix = prefix1
        for line in block:
            if line.startswith(prefix):
                yield line[len(prefix):]
            elif line == prefix.rstrip() + u"\n":
                yield u"\n"
            else:
                raise AssertionError("line=%r, prefix=%r" % (line, prefix))
            prefix = prefix2
        while True:
            yield ""


class MultiprocessingUnsupported(Exception):
    pass


class MultiprocessRefactoringTool(RefactoringTool):

    def __init__(self, *args, **kwargs):
        super(MultiprocessRefactoringTool, self).__init__(*args, **kwargs)
        self.queue = None
        self.output_lock = None

    def refactor(self, items, write=False, doctests_only=False,
                 num_processes=1):
        if num_processes == 1:
            return super(MultiprocessRefactoringTool, self).refactor(
                items, write, doctests_only)
        try:
            import multiprocessing
        except ImportError:
            raise MultiprocessingUnsupported
        if self.queue is not None:
            raise RuntimeError("already doing multiple processes")
        self.queue = multiprocessing.JoinableQueue()
        self.output_lock = multiprocessing.Lock()
        processes = [multiprocessing.Process(target=self._child)
                     for i in xrange(num_processes)]
        try:
            for p in processes:
                p.start()
            super(MultiprocessRefactoringTool, self).refactor(items, write,
                                                              doctests_only)
        finally:
            self.queue.join()
            for i in xrange(num_processes):
                self.queue.put(None)
            for p in processes:
                if p.is_alive():
                    p.join()
            self.queue = None

    def _child(self):
        task = self.queue.get()
        while task is not None:
            args, kwargs = task
            try:
                super(MultiprocessRefactoringTool, self).refactor_file(
                    *args, **kwargs)
            finally:
                self.queue.task_done()
            task = self.queue.get()

    def refactor_file(self, *args, **kwargs):
        if self.queue is not None:
            self.queue.put((args, kwargs))
        else:
            return super(MultiprocessRefactoringTool, self).refactor_file(
                *args, **kwargs)
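The head-node dispatch built by `_get_headnode_dict` and consumed by `traverse_by` above can be reduced to a small self-contained sketch: fixers with a known head node type are bucketed under that type, and type-less fixers are appended to every bucket, so the traversal only has to consult `fixers[node.type]` instead of trying every fixer on every node. The classes and `make_headnode_dict` below are illustrative stand-ins written for Python 3, not part of lib2to3's API.

```python
import collections

class PrintFixer(object):
    # Declares the one node type it can match ("print_stmt" is a
    # placeholder for a grammar symbol number in the real tool).
    accept_type = "print_stmt"

class EveryNodeFixer(object):
    # No head type: this fixer must be offered every node.
    accept_type = None

def make_headnode_dict(fixers, all_types):
    """Bucket fixers by head node type, mirroring _get_headnode_dict:
    fixers without a head type go into every bucket."""
    heads = collections.defaultdict(list)
    every = []
    for fixer in fixers:
        if fixer.accept_type is None:
            every.append(fixer)
        else:
            heads[fixer.accept_type].append(fixer)
    for node_type in all_types:
        heads[node_type].extend(every)
    return dict(heads)

heads = make_headnode_dict([PrintFixer(), EveryNodeFixer()],
                           ["print_stmt", "import_stmt"])
# heads["print_stmt"] holds both fixers; heads["import_stmt"] only
# holds the type-less one.
```

The payoff is the same as in `traverse_by`: the inner loop runs over `heads[node.type]`, which is typically much shorter than the full fixer list.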
# --- main.py ---

"""
Main program for 2to3.
"""

from __future__ import with_statement

import sys
import os
import difflib
import logging
import shutil
import optparse

from . import refactor


def diff_texts(a, b, filename):
    """Return a unified diff of two strings."""
    a = a.splitlines()
    b = b.splitlines()
    return difflib.unified_diff(a, b, filename, filename,
                                "(original)", "(refactored)",
                                lineterm="")


class StdoutRefactoringTool(refactor.MultiprocessRefactoringTool):
    """
    Prints output to stdout.
    """

    def __init__(self, fixers, options, explicit, nobackups, show_diffs):
        self.nobackups = nobackups
        self.show_diffs = show_diffs
        super(StdoutRefactoringTool, self).__init__(fixers, options, explicit)

    def log_error(self, msg, *args, **kwargs):
        self.errors.append((msg, args, kwargs))
        self.logger.error(msg, *args, **kwargs)

    def write_file(self, new_text, filename, old_text, encoding):
        if not self.nobackups:
            # Make backup
            backup = filename + ".bak"
            if os.path.lexists(backup):
                try:
                    os.remove(backup)
                except os.error, err:
                    self.log_message("Can't remove backup %s", backup)
            try:
                os.rename(filename, backup)
            except os.error, err:
                self.log_message("Can't rename %s to %s", filename, backup)
        # Actually write the new file
        write = super(StdoutRefactoringTool, self).write_file
        write(new_text, filename, old_text, encoding)
        if not self.nobackups:
            shutil.copymode(backup, filename)

    def print_output(self, old, new, filename, equal):
        if equal:
            self.log_message("No changes to %s", filename)
        else:
            self.log_message("Refactored %s", filename)
            if self.show_diffs:
                diff_lines = diff_texts(old, new, filename)
                try:
                    if self.output_lock is not None:
                        with self.output_lock:
                            for line in diff_lines:
                                print line
                            sys.stdout.flush()
                    else:
                        for line in diff_lines:
                            print line
                except UnicodeEncodeError:
                    warn("couldn't encode %s's diff for your terminal" %
                         (filename,))
                    return


def warn(msg):
    print >> sys.stderr, "WARNING: %s" % (msg,)


def main(fixer_pkg, args=None):
    """Main program.

    Args:
        fixer_pkg: the name of a package where the fixers are located.
        args: optional; a list of command line arguments. If omitted,
              sys.argv[1:] is used.

    Returns a suggested exit status (0, 1, 2).
""" # Set up option parser parser = optparse.OptionParser(usage="2to3 [options] file|dir ...") parser.add_option("-d", "--doctests_only", action="store_true", help="Fix up doctests only") parser.add_option("-f", "--fix", action="append", default=[], help="Each FIX specifies a transformation; default: all") parser.add_option("-j", "--processes", action="store", default=1, type="int", help="Run 2to3 concurrently") parser.add_option("-x", "--nofix", action="append", default=[], help="Prevent a fixer from being run.") parser.add_option("-l", "--list-fixes", action="store_true", help="List available transformations") parser.add_option("-p", "--print-function", action="store_true", help="Modify the grammar so that print() is a function") parser.add_option("-v", "--verbose", action="store_true", help="More verbose logging") parser.add_option("--no-diffs", action="store_true", help="Don't show diffs of the refactoring") parser.add_option("-w", "--write", action="store_true", help="Write back modified files") parser.add_option("-n", "--nobackups", action="store_true", default=False, help="Don't write backups for modified files.") # Parse command line arguments refactor_stdin = False flags = {} options, args = parser.parse_args(args) if not options.write and options.no_diffs: warn("not writing files and not printing diffs; that's not very useful") if not options.write and options.nobackups: parser.error("Can't use -n without -w") if options.list_fixes: print "Available transformations for the -f/--fix option:" for fixname in refactor.get_all_fix_names(fixer_pkg): print fixname if not args: return 0 if not args: print >> sys.stderr, "At least one file or directory argument required." print >> sys.stderr, "Use --help to show usage." return 2 if "-" in args: refactor_stdin = True if options.write: print >> sys.stderr, "Can't write to stdin." 
return 2 if options.print_function: flags["print_function"] = True # Set up logging handler level = logging.DEBUG if options.verbose else logging.INFO logging.basicConfig(format='%(name)s: %(message)s', level=level) # Initialize the refactoring tool avail_fixes = set(refactor.get_fixers_from_package(fixer_pkg)) unwanted_fixes = set(fixer_pkg + ".fix_" + fix for fix in options.nofix) explicit = set() if options.fix: all_present = False for fix in options.fix: if fix == "all": all_present = True else: explicit.add(fixer_pkg + ".fix_" + fix) requested = avail_fixes.union(explicit) if all_present else explicit else: requested = avail_fixes.union(explicit) fixer_names = requested.difference(unwanted_fixes) rt = StdoutRefactoringTool(sorted(fixer_names), flags, sorted(explicit), options.nobackups, not options.no_diffs) # Refactor all files and directories passed as arguments if not rt.errors: if refactor_stdin: rt.refactor_stdin() else: try: rt.refactor(args, options.write, options.doctests_only, options.processes) except refactor.MultiprocessingUnsupported: assert options.processes > 1 print >> sys.stderr, "Sorry, -j isn't " \ "supported on this platform." 
return 1 rt.summarize() # Return error status (0 if rt.errors is zero) return int(bool(rt.errors)) fixer_util.pyo000066600000033711150501042300007446 0ustar00 Lc@sdZddklZddklZlZddklZddk l Z dZ dZ dZ d Zd1d Zd Zd Zd Ze e dZd1d1dZdZdZd1dZdZd1dZd1dZdZdZdZdZe ddddddd d!d"g Z!d#Z"d$a#d%a$d&a%e&a'd'Z(d(Z)d)Z*d*Z+d+Z,d,Z-d-Z.e ei/ei0gZ1d1d.Z2e ei0ei/ei3gZ4d/Z5d1d0Z6d1S(2s1Utility functions, node construction macros, etc.i(ttoken(tLeaftNode(tpython_symbols(tpatcompcCs%tti|ttid|gS(Nu=(RtsymstargumentRRtEQUAL(tkeywordtvalue((s*/usr/lib64/python2.6/lib2to3/fixer_util.pyt KeywordArgs cCsttidS(Nu((RRtLPAR(((s*/usr/lib64/python2.6/lib2to3/fixer_util.pytLParenscCsttidS(Nu)(RRtRPAR(((s*/usr/lib64/python2.6/lib2to3/fixer_util.pytRParenscCspt|tp |g}nt|tpd|_|g}ntti|ttidddg|S(sBuild an assignment statementu u=tprefix( t isinstancetlistRRRtatomRRR(ttargettsource((s*/usr/lib64/python2.6/lib2to3/fixer_util.pytAssigns    cCstti|d|S(sReturn a NAME leafR(RRtNAME(tnameR((s*/usr/lib64/python2.6/lib2to3/fixer_util.pytName$scCs|ttit|ggS(sA node tuple for obj.attr(RRttrailertDot(tobjtattr((s*/usr/lib64/python2.6/lib2to3/fixer_util.pytAttr(scCsttidS(s A comma leafu,(RRtCOMMA(((s*/usr/lib64/python2.6/lib2to3/fixer_util.pytComma,scCsttidS(sA period (.) 
pygram.py
# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
"""Export the Python grammar and symbols."""

# Python imports
import os

# Local imports
from .pgen2 import token
from .pgen2 import driver
from . import pytree

# The grammar file
_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__), "Grammar.txt")


class Symbols(object):

    def __init__(self, grammar):
        """Initializer.

        Creates an attribute for each grammar symbol (nonterminal),
        whose value is the symbol's type (an int >= 256).
        """
        for name, symbol in grammar.symbol2number.iteritems():
            setattr(self, name, symbol)


python_grammar = driver.load_grammar(_GRAMMAR_FILE)

python_symbols = Symbols(python_grammar)

python_grammar_no_print_statement = python_grammar.copy()
del python_grammar_no_print_statement.keywords["print"]

PatternGrammar.txt
# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

# A grammar to describe tree matching patterns.
# Not shown here:
# - 'TOKEN' stands for any token (leaf node)
# - 'any' stands for any node (leaf or interior)
# With 'any' we can still specify the sub-structure.

# The start symbol is 'Matcher'.

Matcher: Alternatives ENDMARKER

Alternatives: Alternative ('|' Alternative)*

Alternative: (Unit | NegatedUnit)+

Unit: [NAME '='] ( STRING [Repeater]
                 | NAME [Details] [Repeater]
                 | '(' Alternatives ')' [Repeater]
                 | '[' Alternatives ']'
                 )

NegatedUnit: 'not' (STRING | NAME [Details] | '(' Alternatives ')')

Repeater: '*' | '+' | '{' NUMBER [',' NUMBER] '}'

Details: '<' Alternatives '>'

patcomp.pyc
[compiled Python 2.6 bytecode omitted — pattern compiler: compiles patterns written in the PatternGrammar.txt language into pytree Pattern objects]
pgen2/conv.pyc
[compiled Python 2.6 bytecode omitted — converts the graminit.[ch] tables emitted by pgen into Python data structures]
pgen2/tokenize.pyo
[compiled Python 2.6 bytecode omitted — tokenization support for Python programs (generate_tokens, untokenize)]
pgen2/parse.pyc
[compiled Python 2.6 bytecode omitted — parser engine for the grammar tables generated by pgen]
pgen2/parse.pyo
[compiled Python 2.6 bytecode omitted — optimized (.pyo) build of the same pgen2 parse module]
The grammar table must be loaded first. See Parser/parser.c in the Python distribution for additional info on how this parsing engine works. i(ttokent ParseErrorcBseZdZdZRS(s(Exception to signal the parser is stuck.cCsHti|d||||f||_||_||_||_dS(Ns!%s: type=%r, value=%r, context=%r(t Exceptiont__init__tmsgttypetvaluetcontext(tselfRRRR((s+/usr/lib64/python2.6/lib2to3/pgen2/parse.pyRs     (t__name__t __module__t__doc__R(((s+/usr/lib64/python2.6/lib2to3/pgen2/parse.pyRstParsercBsSeZdZddZddZdZdZdZdZ dZ RS( s5Parser engine. The proper usage sequence is: p = Parser(grammar, [converter]) # create instance p.setup([start]) # prepare for parsing : if p.addtoken(...): # parse a token; may raise ParseError break root = p.rootnode # root of abstract syntax tree A Parser instance may be reused by calling setup() repeatedly. A Parser instance contains state pertaining to the current token sequence, and should not be used concurrently by different threads to parse separate token sequences. See driver.py for how to get input tokens by tokenizing a file or string. Parsing is complete when addtoken() returns True; the root of the abstract syntax tree can then be retrieved from the rootnode instance variable. When a syntax error occurs, addtoken() raises the ParseError exception. There is no error recovery; the parser cannot be used after a syntax error was reported (but it can be reinitialized by calling setup()). cCs ||_|pd|_dS(sConstructor. The grammar argument is a grammar.Grammar instance; see the grammar module for more information. The parser is not ready yet for parsing; you must call the setup() method to get it started. The optional convert argument is a function mapping concrete syntax tree nodes to abstract syntax tree nodes. If not given, no conversion is done and the syntax tree produced is the concrete syntax tree. 
If given, it must be a function of two arguments, the first being the grammar (a grammar.Grammar instance), and the second being the concrete syntax tree node to be converted. The syntax tree is converted from the bottom up. A concrete syntax tree node is a (type, value, context, nodes) tuple, where type is the node type (a token or symbol number), value is None for symbols and a string for tokens, context is None or an opaque value used for error reporting (typically a (lineno, offset) pair), and nodes is a list of children for symbols, and None for tokens. An abstract syntax tree node may be anything; this is entirely up to the converter function. cSs|S(((tgrammartnode((s+/usr/lib64/python2.6/lib2to3/pgen2/parse.pytWsN(R tconvert(RR R((s+/usr/lib64/python2.6/lib2to3/pgen2/parse.pyR9s cCsm|djo|ii}n|ddgf}|ii|d|f}|g|_d|_t|_dS(sPrepare for parsing. This *must* be called before starting to parse. The optional argument is an alternative start symbol; it defaults to the grammar's start symbol. You can use a Parser instance to parse any number of programs; each time you call setup() the parser is reset to an initial state determined by the (implicit or explicit) start symbol. iN(tNoneR tstarttdfaststacktrootnodetsett used_names(RRtnewnodet stackentry((s+/usr/lib64/python2.6/lib2to3/pgen2/parse.pytsetupYs   cCs|i|||}xto|id\}}}|\}} ||} xm| D] \} } |ii| \} }|| jo~|i||| || }xZ||d|fgjo?|i|iptS|id\}}}|\}} qWtS| djoR|ii| }|\}}||jo%|i | |ii| | |Pq^qRqRWd|f| jo1|i|ipt d|||qqt d|||qWdS(s<Add a token; return True iff this is the end of the program.iiistoo much inputs bad inputN( tclassifytTrueRR tlabelstshifttpoptFalseRtpushR(RRRRtilabeltdfatstateRtstatestfirsttarcstitnewstatetttvtitsdfat itsstatestitsfirst((s+/usr/lib64/python2.6/lib2to3/pgen2/parse.pytaddtokenqs@             cCs|tijo;|ii||iii|}|dj o|Sn|iii|}|djot d|||n|S(s&Turn a token into a label. 
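The usage sequence in the Parser docstring can be made concrete with a stub. `StubParser` below is hypothetical; it only mimics the driver-side protocol (call `addtoken()` per token until it returns True, then read `rootnode`), not real table-driven parsing:

```python
# Stub illustrating the addtoken()-until-True protocol from the Parser
# docstring.  Token type numbers follow token.py: ENDMARKER=0, NAME=1,
# NEWLINE=4.  This is a sketch, not the real pgen2 parser.
class StubParser:
    def setup(self):
        self.rootnode = None
        self._nodes = []

    def addtoken(self, type_, value, context):
        if type_ == 0:                       # ENDMARKER: input exhausted
            self.rootnode = ('file_input', self._nodes)
            return True                      # parsing complete
        self._nodes.append((type_, value))
        return False                         # more input expected

p = StubParser()
p.setup()
for tok in [(1, 'pass', (1, 0)), (4, '\n', (1, 4)), (0, '', (2, 0))]:
    if p.addtoken(*tok):
        break
print(p.rootnode[0])  # prints: file_input
```

The real Parser follows the same outer loop; driver.py is the component that feeds it tokens produced by tokenize.generate_tokens().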
pgen2/token.pyc (compiled copy of pgen2/token.py, the token constants from "token.h"; bytecode removed)

pgen2/literals.py
# Copyright 2004-2005 Elemental Security, Inc.  All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
"""Safely evaluate Python string literals without using eval().""" import re simple_escapes = {"a": "\a", "b": "\b", "f": "\f", "n": "\n", "r": "\r", "t": "\t", "v": "\v", "'": "'", '"': '"', "\\": "\\"} def escape(m): all, tail = m.group(0, 1) assert all.startswith("\\") esc = simple_escapes.get(tail) if esc is not None: return esc if tail.startswith("x"): hexes = tail[1:] if len(hexes) < 2: raise ValueError("invalid hex string escape ('\\%s')" % tail) try: i = int(hexes, 16) except ValueError: raise ValueError("invalid hex string escape ('\\%s')" % tail) else: try: i = int(tail, 8) except ValueError: raise ValueError("invalid octal string escape ('\\%s')" % tail) return chr(i) def evalString(s): assert s.startswith("'") or s.startswith('"'), repr(s[:1]) q = s[0] if s[:3] == q*3: q = q*3 assert s.endswith(q), repr(s[-len(q):]) assert len(s) >= 2*len(q) s = s[len(q):-len(q)] return re.sub(r"\\(\'|\"|\\|[abfnrtv]|x.{0,2}|[0-7]{1,3})", escape, s) def test(): for i in range(256): c = chr(i) s = repr(c) e = evalString(s) if e != c: print i, c, s, e if __name__ == "__main__": test() pgen2/conv.pyo000066600000015645150501042300007262 0ustar00 Lc@sEdZddkZddklZlZdeifdYZdS(sConvert graminit.[ch] spit out by pgen to Python code. Pgen is the Python parser generator. It is useful to quickly create a parser from a grammar file in Python's grammar notation. But I don't want my parsers to be written in C (yet), so I'm translating the parsing tables to Python data structures and writing a Python parse engine. Note that the token numbers are constants determined by the standard Python tokenizer. The standard token module defines these numbers and their names (the names are not used much). The token numbers are hardcoded into the Python tokenizer and into pgen. A Python implementation of the Python tokenizer is also available, in the standard tokenize module. 
pgen2/conv.pyo (compiled copy of pgen2/conv.py; bytecode removed, recovered docstrings kept)

"""Convert graminit.[ch] spit out by pgen to Python code.

Pgen is the Python parser generator.  It is useful to quickly create a
parser from a grammar file in Python's grammar notation.  But I don't
want my parsers to be written in C (yet), so I'm translating the
parsing tables to Python data structures and writing a Python parse
engine.

Note that the token numbers are constants determined by the standard
Python tokenizer.  The standard token module defines these numbers and
their names (the names are not used much).  The token numbers are
hardcoded into the Python tokenizer and into pgen.  A Python
implementation of the Python tokenizer is also available, in the
standard tokenize module.

On the other hand, symbol numbers (representing the grammar's
non-terminals) are assigned by pgen based on the actual grammar
input.

Note: this module is pretty much obsolete; the pgen module generates
equivalent grammar tables directly from the Grammar.txt input file
without having to invoke the Python pgen C program.
"""

class Converter(grammar.Grammar):
    """Grammar subclass that reads classic pgen output files.

    The run() method reads the tables as produced by the pgen parser
    generator, typically contained in two C files, graminit.h and
    graminit.c.  The other methods are for internal use only.

    See the base class for more documentation.
    """

pgen2/tokenize.pyc (compiled copy of pgen2/tokenize.py; bytecode removed, the source follows below)

pgen2/driver.pyo (compiled copy of pgen2/driver.py; bytecode removed, recovered docstring kept)

"""Parser driver.

This provides a high-level interface to parse a file into a syntax tree."""

__all__ = ["Driver", "load_grammar"]
pgen2/tokenize.py
# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation.
# All rights reserved.

"""Tokenization help for Python programs.

generate_tokens(readline) is a generator that breaks a stream of text
into Python tokens.  It accepts a readline-like method which is called
repeatedly to get the next line of input (or "" for EOF).  It generates
5-tuples with these members:

    the token type (see token.py)
    the token (a string)
    the starting (row, column) indices of the token (a 2-tuple of ints)
    the ending (row, column) indices of the token (a 2-tuple of ints)
    the original line (string)

It is designed to match the working of the Python tokenizer exactly, except
that it produces COMMENT tokens for comments and gives type OP for all
operators

Older entry points
    tokenize_loop(readline, tokeneater)
    tokenize(readline, tokeneater=printtoken)
are the same, except instead of generating tokens, tokeneater is a callback
function to which the 5 fields described above are passed as 5 arguments,
each time a new token is found."""

__author__ = 'Ka-Ping Yee <ping@lfw.org>'
__credits__ = \
    'GvR, ESR, Tim Peters, Thomas Wouters, Fred Drake, Skip Montanaro'

import string, re
from codecs import BOM_UTF8, lookup
from lib2to3.pgen2.token import *

from . import token
__all__ = [x for x in dir(token) if x[0] != '_'] + ["tokenize",
           "generate_tokens", "untokenize"]
del token

try:
    bytes
except NameError:
    # Support bytes type in Python <= 2.5, so 2to3 turns itself into
    # valid Python 3 code.
    bytes = str

def group(*choices): return '(' + '|'.join(choices) + ')'
def any(*choices): return group(*choices) + '*'
def maybe(*choices): return group(*choices) + '?'

Whitespace = r'[ \f\t]*'
Comment = r'#[^\r\n]*'
Ignore = Whitespace + any(r'\\\r?\n' + Whitespace) + maybe(Comment)
Name = r'[a-zA-Z_]\w*'

Binnumber = r'0[bB][01]*'
Hexnumber = r'0[xX][\da-fA-F]*[lL]?'
Octnumber = r'0[oO]?[0-7]*[lL]?'
Decnumber = r'[1-9]\d*[lL]?'
Intnumber = group(Binnumber, Hexnumber, Octnumber, Decnumber)
Exponent = r'[eE][-+]?\d+'
Pointfloat = group(r'\d+\.\d*', r'\.\d+') + maybe(Exponent)
Expfloat = r'\d+' + Exponent
Floatnumber = group(Pointfloat, Expfloat)
Imagnumber = group(r'\d+[jJ]', Floatnumber + r'[jJ]')
Number = group(Imagnumber, Floatnumber, Intnumber)

# Tail end of ' string.
Single = r"[^'\\]*(?:\\.[^'\\]*)*'"
# Tail end of " string.
Double = r'[^"\\]*(?:\\.[^"\\]*)*"'
# Tail end of ''' string.
Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''"
# Tail end of """ string.
Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""'
Triple = group("[ubUB]?[rR]?'''", '[ubUB]?[rR]?"""')
# Single-line ' or " string.
String = group(r"[uU]?[rR]?'[^\n'\\]*(?:\\.[^\n'\\]*)*'",
               r'[uU]?[rR]?"[^\n"\\]*(?:\\.[^\n"\\]*)*"')

# Because of leftmost-then-longest match semantics, be sure to put the
# longest operators first (e.g., if = came before ==, == would get
# recognized as two instances of =).
Operator = group(r"\*\*=?", r">>=?", r"<<=?", r"<>", r"!=",
                 r"//=?", r"->",
                 r"[+\-*/%&|^=<>]=?",
                 r"~")

Bracket = '[][(){}]'
Special = group(r'\r?\n', r'[:;.,`@]')
Funny = group(Operator, Bracket, Special)

PlainToken = group(Number, Funny, String, Name)
Token = Ignore + PlainToken

# First (or only) line of ' or " string.
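The group/any/maybe combinators above build up the token regexes by plain string concatenation. A small Python 3 probe (the pattern fragments are copied verbatim from the definitions above; `number` is an illustrative name) shows how the composed Number pattern classifies literals:

```python
import re

# Same combinators as the module above (they intentionally shadow the
# builtins any/all-style names inside this sketch only).
def group(*choices): return '(' + '|'.join(choices) + ')'
def maybe(*choices): return group(*choices) + '?'

Binnumber = r'0[bB][01]*'
Hexnumber = r'0[xX][\da-fA-F]*[lL]?'
Octnumber = r'0[oO]?[0-7]*[lL]?'
Decnumber = r'[1-9]\d*[lL]?'
Intnumber = group(Binnumber, Hexnumber, Octnumber, Decnumber)
Exponent = r'[eE][-+]?\d+'
Pointfloat = group(r'\d+\.\d*', r'\.\d+') + maybe(Exponent)
Expfloat = r'\d+' + Exponent
Floatnumber = group(Pointfloat, Expfloat)
Imagnumber = group(r'\d+[jJ]', Floatnumber + r'[jJ]')
Number = group(Imagnumber, Floatnumber, Intnumber)

# Anchor with \Z so only complete literals count as matches.
number = re.compile(Number + r'\Z')
for lit in ('42', '0x1f', '3.14', '1e-9', '3.5j', '0b101'):
    assert number.match(lit), lit
```

Ordering matters here just as the Operator comment says: Imagnumber is tried before Floatnumber and Intnumber so that the trailing `j` is consumed by the longest alternative.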
ContStr = group(r"[uUbB]?[rR]?'[^\n'\\]*(?:\\.[^\n'\\]*)*" +
                group("'", r'\\\r?\n'),
                r'[uUbB]?[rR]?"[^\n"\\]*(?:\\.[^\n"\\]*)*' +
                group('"', r'\\\r?\n'))
PseudoExtras = group(r'\\\r?\n', Comment, Triple)
PseudoToken = Whitespace + group(PseudoExtras, Number, Funny, ContStr, Name)

tokenprog, pseudoprog, single3prog, double3prog = map(
    re.compile, (Token, PseudoToken, Single3, Double3))
endprogs = {"'": re.compile(Single), '"': re.compile(Double),
            "'''": single3prog, '"""': double3prog,
            "r'''": single3prog, 'r"""': double3prog,
            "u'''": single3prog, 'u"""': double3prog,
            "b'''": single3prog, 'b"""': double3prog,
            "ur'''": single3prog, 'ur"""': double3prog,
            "br'''": single3prog, 'br"""': double3prog,
            "R'''": single3prog, 'R"""': double3prog,
            "U'''": single3prog, 'U"""': double3prog,
            "B'''": single3prog, 'B"""': double3prog,
            "uR'''": single3prog, 'uR"""': double3prog,
            "Ur'''": single3prog, 'Ur"""': double3prog,
            "UR'''": single3prog, 'UR"""': double3prog,
            "bR'''": single3prog, 'bR"""': double3prog,
            "Br'''": single3prog, 'Br"""': double3prog,
            "BR'''": single3prog, 'BR"""': double3prog,
            'r': None, 'R': None,
            'u': None, 'U': None,
            'b': None, 'B': None}

triple_quoted = {}
for t in ("'''", '"""',
          "r'''", 'r"""', "R'''", 'R"""',
          "u'''", 'u"""', "U'''", 'U"""',
          "b'''", 'b"""', "B'''", 'B"""',
          "ur'''", 'ur"""', "Ur'''", 'Ur"""',
          "uR'''", 'uR"""', "UR'''", 'UR"""',
          "br'''", 'br"""', "Br'''", 'Br"""',
          "bR'''", 'bR"""', "BR'''", 'BR"""',):
    triple_quoted[t] = t
single_quoted = {}
for t in ("'", '"',
          "r'", 'r"', "R'", 'R"',
          "u'", 'u"', "U'", 'U"',
          "b'", 'b"', "B'", 'B"',
          "ur'", 'ur"', "Ur'", 'Ur"',
          "uR'", 'uR"', "UR'", 'UR"',
          "br'", 'br"', "Br'", 'Br"',
          "bR'", 'bR"', "BR'", 'BR"', ):
    single_quoted[t] = t

tabsize = 8

class TokenError(Exception): pass

class StopTokenizing(Exception): pass

def printtoken(type, token, start, end, line): # for testing
    (srow, scol) = start
    (erow, ecol) = end
    print "%d,%d-%d,%d:\t%s\t%s" % \
        (srow, scol, erow, ecol, tok_name[type], repr(token))

def tokenize(readline, tokeneater=printtoken):
    """
    The tokenize() function accepts two parameters: one representing the
    input stream, and one providing an output mechanism for tokenize().

    The first parameter, readline, must be a callable object which provides
    the same interface as the readline() method of built-in file objects.
    Each call to the function should return one line of input as a string.

    The second parameter, tokeneater, must also be a callable object. It is
    called once for each token, with five arguments, corresponding to the
    tuples generated by generate_tokens().
    """
    try:
        tokenize_loop(readline, tokeneater)
    except StopTokenizing:
        pass

# backwards compatible interface
def tokenize_loop(readline, tokeneater):
    for token_info in generate_tokens(readline):
        tokeneater(*token_info)

class Untokenizer:

    def __init__(self):
        self.tokens = []
        self.prev_row = 1
        self.prev_col = 0

    def add_whitespace(self, start):
        row, col = start
        assert row <= self.prev_row
        col_offset = col - self.prev_col
        if col_offset:
            self.tokens.append(" " * col_offset)

    def untokenize(self, iterable):
        for t in iterable:
            if len(t) == 2:
                self.compat(t, iterable)
                break
            tok_type, token, start, end, line = t
            self.add_whitespace(start)
            self.tokens.append(token)
            self.prev_row, self.prev_col = end
            if tok_type in (NEWLINE, NL):
                self.prev_row += 1
                self.prev_col = 0
        return "".join(self.tokens)

    def compat(self, token, iterable):
        startline = False
        indents = []
        toks_append = self.tokens.append
        toknum, tokval = token
        if toknum in (NAME, NUMBER):
            tokval += ' '
        if toknum in (NEWLINE, NL):
            startline = True
        for tok in iterable:
            toknum, tokval = tok[:2]

            if toknum in (NAME, NUMBER):
                tokval += ' '

            if toknum == INDENT:
                indents.append(tokval)
                continue
            elif toknum == DEDENT:
                indents.pop()
                continue
            elif toknum in (NEWLINE, NL):
                startline = True
            elif startline and indents:
                toks_append(indents[-1])
                startline = False
            toks_append(tokval)

cookie_re = re.compile("coding[:=]\s*([-\w.]+)")

def _get_normal_name(orig_enc):
    """Imitates get_normal_name in tokenizer.c."""
    # Only care about the first 12 characters.
    enc = orig_enc[:12].lower().replace("_", "-")
    if enc == "utf-8" or enc.startswith("utf-8-"):
        return "utf-8"
    if enc in ("latin-1", "iso-8859-1", "iso-latin-1") or \
       enc.startswith(("latin-1-", "iso-8859-1-", "iso-latin-1-")):
        return "iso-8859-1"
    return orig_enc

def detect_encoding(readline):
    """
    The detect_encoding() function is used to detect the encoding that should
    be used to decode a Python source file. It requires one argument, readline,
    in the same way as the tokenize() generator.

    It will call readline a maximum of twice, and return the encoding used
    (as a string) and a list of any lines (left as bytes) it has read in.

    It detects the encoding from the presence of a utf-8 bom or an encoding
    cookie as specified in pep-0263. If both a bom and a cookie are present,
    but disagree, a SyntaxError will be raised. If the encoding cookie is an
    invalid charset, raise a SyntaxError.  Note that if a utf-8 bom is found,
    'utf-8-sig' is returned.

    If no encoding is specified, then the default of 'utf-8' will be returned.
    """
    bom_found = False
    encoding = None
    default = 'utf-8'
    def read_or_stop():
        try:
            return readline()
        except StopIteration:
            return bytes()

    def find_cookie(line):
        try:
            line_string = line.decode('ascii')
        except UnicodeDecodeError:
            return None

        matches = cookie_re.findall(line_string)
        if not matches:
            return None
        encoding = _get_normal_name(matches[0])
        try:
            codec = lookup(encoding)
        except LookupError:
            # This behaviour mimics the Python interpreter
            raise SyntaxError("unknown encoding: " + encoding)

        if bom_found:
            if codec.name != 'utf-8':
                # This behaviour mimics the Python interpreter
                raise SyntaxError('encoding problem: utf-8')
            encoding += '-sig'
        return encoding

    first = read_or_stop()
    if first.startswith(BOM_UTF8):
        bom_found = True
        first = first[3:]
        default = 'utf-8-sig'
    if not first:
        return default, []

    encoding = find_cookie(first)
    if encoding:
        return encoding, [first]

    second = read_or_stop()
    if not second:
        return default, [first]

    encoding = find_cookie(second)
    if encoding:
        return encoding, [first, second]

    return default, [first, second]

def untokenize(iterable):
    """Transform tokens back into Python source code.

    Each element returned by the iterable must be a token sequence
    with at least two elements, a token number and token value.  If
    only two tokens are passed, the resulting output is poor.

    Round-trip invariant for full input:
        Untokenized source will match input source exactly

    Round-trip invariant for limited input:
        # Output text will tokenize back to the input
        t1 = [tok[:2] for tok in generate_tokens(f.readline)]
        newcode = untokenize(t1)
        readline = iter(newcode.splitlines(1)).next
        t2 = [tok[:2] for tok in generate_tokens(readline)]
        assert t1 == t2
    """
    ut = Untokenizer()
    return ut.untokenize(iterable)

def generate_tokens(readline):
    """
    The generate_tokens() generator requires one argument, readline, which
    must be a callable object which provides the same interface as the
    readline() method of built-in file objects. Each call to the function
    should return one line of input as a string.  Alternately, readline
    can be a callable function terminating with StopIteration:
        readline = open(myfile).next    # Example of alternate readline

    The generator produces 5-tuples with these members: the token type; the
    token string; a 2-tuple (srow, scol) of ints specifying the row and
    column where the token begins in the source; a 2-tuple (erow, ecol) of
    ints specifying the row and column where the token ends in the source;
    and the line on which the token was found. The line passed is the
    logical line; continuation lines are included.
    """
    lnum = parenlev = continued = 0
    namechars, numchars = string.ascii_letters + '_', '0123456789'
    contstr, needcont = '', 0
    contline = None
    indents = [0]

    while 1:                                   # loop over lines in stream
        try:
            line = readline()
        except StopIteration:
            line = ''
        lnum = lnum + 1
        pos, max = 0, len(line)

        if contstr:                            # continued string
            if not line:
                raise TokenError, ("EOF in multi-line string", strstart)
            endmatch = endprog.match(line)
            if endmatch:
                pos = end = endmatch.end(0)
                yield (STRING, contstr + line[:end],
                       strstart, (lnum, end), contline + line)
                contstr, needcont = '', 0
                contline = None
            elif needcont and line[-2:] != '\\\n' and line[-3:] != '\\\r\n':
                yield (ERRORTOKEN, contstr + line,
                       strstart, (lnum, len(line)), contline)
                contstr = ''
                contline = None
                continue
            else:
                contstr = contstr + line
                contline = contline + line
                continue

        elif parenlev == 0 and not continued:  # new statement
            if not line: break
            column = 0
            while pos < max:                   # measure leading whitespace
                if line[pos] == ' ': column = column + 1
                elif line[pos] == '\t': column = (column//tabsize + 1)*tabsize
                elif line[pos] == '\f': column = 0
                else: break
                pos = pos + 1
            if pos == max: break

            if line[pos] in '#\r\n':           # skip comments or blank lines
                if line[pos] == '#':
                    comment_token = line[pos:].rstrip('\r\n')
                    nl_pos = pos + len(comment_token)
                    yield (COMMENT, comment_token,
                           (lnum, pos), (lnum, pos + len(comment_token)), line)
                    yield (NL, line[nl_pos:],
                           (lnum, nl_pos), (lnum, len(line)), line)
                else:
                    yield ((NL, COMMENT)[line[pos] == '#'], line[pos:],
                           (lnum, pos), (lnum, len(line)), line)
                continue

            if column > indents[-1]:           # count indents or dedents
                indents.append(column)
                yield (INDENT, line[:pos], (lnum, 0), (lnum, pos), line)
            while column < indents[-1]:
                if column not in indents:
                    raise IndentationError(
                        "unindent does not match any outer indentation level",
                        ("<tokenize>", lnum, pos, line))
                indents = indents[:-1]
                yield (DEDENT, '', (lnum, pos), (lnum, pos), line)

        else:                                  # continued statement
            if not line:
                raise TokenError, ("EOF in multi-line statement", (lnum, 0))
            continued = 0

        while pos < max:
            pseudomatch = pseudoprog.match(line, pos)
            if pseudomatch:                                # scan for tokens
                start, end = pseudomatch.span(1)
                spos, epos, pos = (lnum, start), (lnum, end), end
                token, initial = line[start:end], line[start]

                if initial in numchars or \
                   (initial == '.' and token != '.'):      # ordinary number
                    yield (NUMBER, token, spos, epos, line)
                elif initial in '\r\n':
                    newline = NEWLINE
                    if parenlev > 0:
                        newline = NL
                    yield (newline, token, spos, epos, line)
                elif initial == '#':
                    assert not token.endswith("\n")
                    yield (COMMENT, token, spos, epos, line)
                elif token in triple_quoted:
                    endprog = endprogs[token]
                    endmatch = endprog.match(line, pos)
                    if endmatch:                           # all on one line
                        pos = endmatch.end(0)
                        token = line[start:pos]
                        yield (STRING, token, spos, (lnum, pos), line)
                    else:
                        strstart = (lnum, start)           # multiple lines
                        contstr = line[start:]
                        contline = line
                        break
                elif initial in single_quoted or \
                    token[:2] in single_quoted or \
                    token[:3] in single_quoted:
                    if token[-1] == '\n':                  # continued string
                        strstart = (lnum, start)
                        endprog = (endprogs[initial] or endprogs[token[1]] or
                                   endprogs[token[2]])
                        contstr, needcont = line[start:], 1
                        contline = line
                        break
                    else:                                  # ordinary string
                        yield (STRING, token, spos, epos, line)
                elif initial in namechars:                 # ordinary name
                    yield (NAME, token, spos, epos, line)
                elif initial == '\\':                      # continued stmt
                    # This yield is new;
needed for better idempotency: yield (NL, token, spos, (lnum, pos), line) continued = 1 else: if initial in '([{': parenlev = parenlev + 1 elif initial in ')]}': parenlev = parenlev - 1 yield (OP, token, spos, epos, line) else: yield (ERRORTOKEN, line[pos], (lnum, pos), (lnum, pos+1), line) pos = pos + 1 for indent in indents[1:]: # pop remaining indent levels yield (DEDENT, '', (lnum, 0), (lnum, 0), '') yield (ENDMARKER, '', (lnum, 0), (lnum, 0), '') if __name__ == '__main__': # testing import sys if len(sys.argv) > 1: tokenize(open(sys.argv[1]).readline) else: tokenize(sys.stdin.readline) pgen2/driver.py000066600000011375150501042300007425 0ustar00# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. # Licensed to PSF under a Contributor Agreement. # Modifications: # Copyright 2006 Google, Inc. All Rights Reserved. # Licensed to PSF under a Contributor Agreement. """Parser driver. This provides a high-level interface to parse a file into a syntax tree. """ __author__ = "Guido van Rossum " __all__ = ["Driver", "load_grammar"] # Python imports import codecs import os import logging import sys # Pgen imports from . import grammar, parse, token, tokenize, pgen class Driver(object): def __init__(self, grammar, convert=None, logger=None): self.grammar = grammar if logger is None: logger = logging.getLogger() self.logger = logger self.convert = convert def parse_tokens(self, tokens, debug=False): """Parse a series of tokens and return the syntax tree.""" # XXX Move the prefix computation into a wrapper around tokenize. 
p = parse.Parser(self.grammar, self.convert) p.setup() lineno = 1 column = 0 type = value = start = end = line_text = None prefix = u"" for quintuple in tokens: type, value, start, end, line_text = quintuple if start != (lineno, column): assert (lineno, column) <= start, ((lineno, column), start) s_lineno, s_column = start if lineno < s_lineno: prefix += "\n" * (s_lineno - lineno) lineno = s_lineno column = 0 if column < s_column: prefix += line_text[column:s_column] column = s_column if type in (tokenize.COMMENT, tokenize.NL): prefix += value lineno, column = end if value.endswith("\n"): lineno += 1 column = 0 continue if type == token.OP: type = grammar.opmap[value] if debug: self.logger.debug("%s %r (prefix=%r)", token.tok_name[type], value, prefix) if p.addtoken(type, value, (prefix, start)): if debug: self.logger.debug("Stop.") break prefix = "" lineno, column = end if value.endswith("\n"): lineno += 1 column = 0 else: # We never broke out -- EOF is too soon (how can this happen???) 
raise parse.ParseError("incomplete input", type, value, (prefix, start)) return p.rootnode def parse_stream_raw(self, stream, debug=False): """Parse a stream and return the syntax tree.""" tokens = tokenize.generate_tokens(stream.readline) return self.parse_tokens(tokens, debug) def parse_stream(self, stream, debug=False): """Parse a stream and return the syntax tree.""" return self.parse_stream_raw(stream, debug) def parse_file(self, filename, encoding=None, debug=False): """Parse a file and return the syntax tree.""" stream = codecs.open(filename, "r", encoding) try: return self.parse_stream(stream, debug) finally: stream.close() def parse_string(self, text, debug=False): """Parse a string and return the syntax tree.""" tokens = tokenize.generate_tokens(generate_lines(text).next) return self.parse_tokens(tokens, debug) def generate_lines(text): """Generator that behaves like readline without using StringIO.""" for line in text.splitlines(True): yield line while True: yield "" def load_grammar(gt="Grammar.txt", gp=None, save=True, force=False, logger=None): """Load the grammar (maybe from a pickle).""" if logger is None: logger = logging.getLogger() if gp is None: head, tail = os.path.splitext(gt) if tail == ".txt": tail = "" gp = head + tail + ".".join(map(str, sys.version_info)) + ".pickle" if force or not _newer(gp, gt): logger.info("Generating grammar tables from %s", gt) g = pgen.generate_grammar(gt) if save: logger.info("Writing grammar tables to %s", gp) try: g.dump(gp) except IOError, e: logger.info("Writing failed:"+str(e)) else: g = grammar.Grammar() g.load(gp) return g def _newer(a, b): """Inquire whether file a was written since file b.""" if not os.path.exists(a): return False if not os.path.exists(b): return True return os.path.getmtime(a) >= os.path.getmtime(b) pgen2/token.py000066600000002337150501042300007250 0ustar00#! 
/usr/bin/env python2.6 """Token constants (from "token.h").""" # Taken from Python (r53757) and modified to include some tokens # originally monkeypatched in by pgen2.tokenize #--start constants-- ENDMARKER = 0 NAME = 1 NUMBER = 2 STRING = 3 NEWLINE = 4 INDENT = 5 DEDENT = 6 LPAR = 7 RPAR = 8 LSQB = 9 RSQB = 10 COLON = 11 COMMA = 12 SEMI = 13 PLUS = 14 MINUS = 15 STAR = 16 SLASH = 17 VBAR = 18 AMPER = 19 LESS = 20 GREATER = 21 EQUAL = 22 DOT = 23 PERCENT = 24 BACKQUOTE = 25 LBRACE = 26 RBRACE = 27 EQEQUAL = 28 NOTEQUAL = 29 LESSEQUAL = 30 GREATEREQUAL = 31 TILDE = 32 CIRCUMFLEX = 33 LEFTSHIFT = 34 RIGHTSHIFT = 35 DOUBLESTAR = 36 PLUSEQUAL = 37 MINEQUAL = 38 STAREQUAL = 39 SLASHEQUAL = 40 PERCENTEQUAL = 41 AMPEREQUAL = 42 VBAREQUAL = 43 CIRCUMFLEXEQUAL = 44 LEFTSHIFTEQUAL = 45 RIGHTSHIFTEQUAL = 46 DOUBLESTAREQUAL = 47 DOUBLESLASH = 48 DOUBLESLASHEQUAL = 49 AT = 50 OP = 51 COMMENT = 52 NL = 53 RARROW = 54 ERRORTOKEN = 55 N_TOKENS = 56 NT_OFFSET = 256 #--end constants-- tok_name = {} for _name, _value in globals().items(): if type(_value) is type(0): tok_name[_value] = _name def ISTERMINAL(x): return x < NT_OFFSET def ISNONTERMINAL(x): return x >= NT_OFFSET def ISEOF(x): return x == ENDMARKER
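The terminal/nonterminal split encoded by NT_OFFSET above survives unchanged in the modern standard-library token module, so the classification helpers can be exercised directly against that module (a Python 3 analogue of this lib2to3 copy, not the module defined here):

```python
import token

# Terminal (token) numbers sit below NT_OFFSET; grammar symbol numbers
# are assigned at or above it, exactly as in the constants above.
assert token.ISTERMINAL(token.NAME)
assert token.ISNONTERMINAL(token.NT_OFFSET)
assert token.ISEOF(token.ENDMARKER)

# tok_name is the reverse mapping built by the globals() loop above.
print(token.tok_name[token.ENDMARKER])  # → ENDMARKER
```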
pgen2/__init__.py000066600000000217150501042300007662 0ustar00# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. # Licensed to PSF under a Contributor Agreement. """The pgen2 package."""
pgen2/grammar.py000066600000012375150501042300007561 0ustar00# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. # Licensed to PSF under a Contributor Agreement. """This module defines the data structures used to represent a grammar. These are a bit arcane because they are derived from the data structures used by Python's 'pgen' parser generator. There's also a table here mapping operators to their names in the token module; the Python tokenize module reports all operators as the fallback token code OP, but the parser needs the actual token code. """ # Python imports import pickle # Local imports from . import token, tokenize class Grammar(object): """Pgen parsing tables conversion class.
Once initialized, this class supplies the grammar tables for the parsing engine implemented by parse.py. The parsing engine accesses the instance variables directly. The class here does not provide initialization of the tables; several subclasses exist to do this (see the conv and pgen modules). The load() method reads the tables from a pickle file, which is much faster than the other ways offered by subclasses. The pickle file is written by calling dump() (after loading the grammar tables using a subclass). The report() method prints a readable representation of the tables to stdout, for debugging. The instance variables are as follows: symbol2number -- a dict mapping symbol names to numbers. Symbol numbers are always 256 or higher, to distinguish them from token numbers, which are between 0 and 255 (inclusive). number2symbol -- a dict mapping numbers to symbol names; these two are each other's inverse. states -- a list of DFAs, where each DFA is a list of states, each state is a list of arcs, and each arc is an (i, j) pair where i is a label and j is a state number. The DFA number is the index into this list. (This name is slightly confusing.) Final states are represented by a special arc of the form (0, j) where j is its own state number. dfas -- a dict mapping symbol numbers to (DFA, first) pairs, where DFA is an item from the states list above, and first is a set of tokens that can begin this grammar rule (represented by a dict whose values are always 1). labels -- a list of (x, y) pairs where x is either a token number or a symbol number, and y is either None or a string; the strings are keywords. The label number is the index in this list; label numbers are used to mark state transitions (arcs) in the DFAs. start -- the number of the grammar's start symbol. keywords -- a dict mapping keyword strings to arc labels. tokens -- a dict mapping token numbers to arc labels.
""" def __init__(self): self.symbol2number = {} self.number2symbol = {} self.states = [] self.dfas = {} self.labels = [(0, "EMPTY")] self.keywords = {} self.tokens = {} self.symbol2label = {} self.start = 256 def dump(self, filename): """Dump the grammar tables to a pickle file.""" f = open(filename, "wb") pickle.dump(self.__dict__, f, 2) f.close() def load(self, filename): """Load the grammar tables from a pickle file.""" f = open(filename, "rb") d = pickle.load(f) f.close() self.__dict__.update(d) def copy(self): """ Copy the grammar. """ new = self.__class__() for dict_attr in ("symbol2number", "number2symbol", "dfas", "keywords", "tokens", "symbol2label"): setattr(new, dict_attr, getattr(self, dict_attr).copy()) new.labels = self.labels[:] new.states = self.states[:] new.start = self.start return new def report(self): """Dump the grammar tables to standard output, for debugging.""" from pprint import pprint print "s2n" pprint(self.symbol2number) print "n2s" pprint(self.number2symbol) print "states" pprint(self.states) print "dfas" pprint(self.dfas) print "labels" pprint(self.labels) print "start", self.start # Map from operator to number (since tokenize doesn't do this) opmap_raw = """ ( LPAR ) RPAR [ LSQB ] RSQB : COLON , COMMA ; SEMI + PLUS - MINUS * STAR / SLASH | VBAR & AMPER < LESS > GREATER = EQUAL . DOT % PERCENT ` BACKQUOTE { LBRACE } RBRACE @ AT == EQEQUAL != NOTEQUAL <> NOTEQUAL <= LESSEQUAL >= GREATEREQUAL ~ TILDE ^ CIRCUMFLEX << LEFTSHIFT >> RIGHTSHIFT ** DOUBLESTAR += PLUSEQUAL -= MINEQUAL *= STAREQUAL /= SLASHEQUAL %= PERCENTEQUAL &= AMPEREQUAL |= VBAREQUAL ^= CIRCUMFLEXEQUAL <<= LEFTSHIFTEQUAL >>= RIGHTSHIFTEQUAL **= DOUBLESTAREQUAL // DOUBLESLASH //= DOUBLESLASHEQUAL -> RARROW """ opmap = {} for line in opmap_raw.splitlines(): if line: op, name = line.split() opmap[op] = getattr(token, name) pgen2/parse.py000066600000017565150501042300007253 0ustar00# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. 
# Licensed to PSF under a Contributor Agreement. """Parser engine for the grammar tables generated by pgen. The grammar table must be loaded first. See Parser/parser.c in the Python distribution for additional info on how this parsing engine works. """ # Local imports from . import token class ParseError(Exception): """Exception to signal the parser is stuck.""" def __init__(self, msg, type, value, context): Exception.__init__(self, "%s: type=%r, value=%r, context=%r" % (msg, type, value, context)) self.msg = msg self.type = type self.value = value self.context = context class Parser(object): """Parser engine. The proper usage sequence is: p = Parser(grammar, [converter]) # create instance p.setup([start]) # prepare for parsing : if p.addtoken(...): # parse a token; may raise ParseError break root = p.rootnode # root of abstract syntax tree A Parser instance may be reused by calling setup() repeatedly. A Parser instance contains state pertaining to the current token sequence, and should not be used concurrently by different threads to parse separate token sequences. See driver.py for how to get input tokens by tokenizing a file or string. Parsing is complete when addtoken() returns True; the root of the abstract syntax tree can then be retrieved from the rootnode instance variable. When a syntax error occurs, addtoken() raises the ParseError exception. There is no error recovery; the parser cannot be used after a syntax error was reported (but it can be reinitialized by calling setup()). """ def __init__(self, grammar, convert=None): """Constructor. The grammar argument is a grammar.Grammar instance; see the grammar module for more information. The parser is not ready yet for parsing; you must call the setup() method to get it started. The optional convert argument is a function mapping concrete syntax tree nodes to abstract syntax tree nodes. If not given, no conversion is done and the syntax tree produced is the concrete syntax tree. 
If given, it must be a function of two arguments, the first being the grammar (a grammar.Grammar instance), and the second being the concrete syntax tree node to be converted. The syntax tree is converted from the bottom up. A concrete syntax tree node is a (type, value, context, nodes) tuple, where type is the node type (a token or symbol number), value is None for symbols and a string for tokens, context is None or an opaque value used for error reporting (typically a (lineno, offset) pair), and nodes is a list of children for symbols, and None for tokens. An abstract syntax tree node may be anything; this is entirely up to the converter function. """ self.grammar = grammar self.convert = convert or (lambda grammar, node: node) def setup(self, start=None): """Prepare for parsing. This *must* be called before starting to parse. The optional argument is an alternative start symbol; it defaults to the grammar's start symbol. You can use a Parser instance to parse any number of programs; each time you call setup() the parser is reset to an initial state determined by the (implicit or explicit) start symbol. """ if start is None: start = self.grammar.start # Each stack entry is a tuple: (dfa, state, node). # A node is a tuple: (type, value, context, children), # where children is a list of nodes or None, and context may be None. 
newnode = (start, None, None, []) stackentry = (self.grammar.dfas[start], 0, newnode) self.stack = [stackentry] self.rootnode = None self.used_names = set() # Aliased to self.rootnode.used_names in pop() def addtoken(self, type, value, context): """Add a token; return True iff this is the end of the program.""" # Map from token to label ilabel = self.classify(type, value, context) # Loop until the token is shifted; may raise exceptions while True: dfa, state, node = self.stack[-1] states, first = dfa arcs = states[state] # Look for a state with this label for i, newstate in arcs: t, v = self.grammar.labels[i] if ilabel == i: # Look it up in the list of labels assert t < 256 # Shift a token; we're done with it self.shift(type, value, newstate, context) # Pop while we are in an accept-only state state = newstate while states[state] == [(0, state)]: self.pop() if not self.stack: # Done parsing! return True dfa, state, node = self.stack[-1] states, first = dfa # Done with this token return False elif t >= 256: # See if it's a symbol and if we're in its first set itsdfa = self.grammar.dfas[t] itsstates, itsfirst = itsdfa if ilabel in itsfirst: # Push a symbol self.push(t, self.grammar.dfas[t], newstate, context) break # To continue the outer while loop else: if (0, state) in arcs: # An accepting state, pop it and try something else self.pop() if not self.stack: # Done parsing, but another token is input raise ParseError("too much input", type, value, context) else: # No success finding a transition raise ParseError("bad input", type, value, context) def classify(self, type, value, context): """Turn a token into a label. 
(Internal)""" if type == token.NAME: # Keep a listing of all used names self.used_names.add(value) # Check for reserved words ilabel = self.grammar.keywords.get(value) if ilabel is not None: return ilabel ilabel = self.grammar.tokens.get(type) if ilabel is None: raise ParseError("bad token", type, value, context) return ilabel def shift(self, type, value, newstate, context): """Shift a token. (Internal)""" dfa, state, node = self.stack[-1] newnode = (type, value, context, None) newnode = self.convert(self.grammar, newnode) if newnode is not None: node[-1].append(newnode) self.stack[-1] = (dfa, newstate, node) def push(self, type, newdfa, newstate, context): """Push a nonterminal. (Internal)""" dfa, state, node = self.stack[-1] newnode = (type, None, context, []) self.stack[-1] = (dfa, newstate, node) self.stack.append((newdfa, 0, newnode)) def pop(self): """Pop a nonterminal. (Internal)""" popdfa, popstate, popnode = self.stack.pop() newnode = self.convert(self.grammar, popnode) if newnode is not None: if self.stack: dfa, state, node = self.stack[-1] node[-1].append(newnode) else: self.rootnode = newnode self.rootnode.used_names = self.used_names pgen2/conv.py000066600000022631150501042300007074 0ustar00# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. # Licensed to PSF under a Contributor Agreement. """Convert graminit.[ch] spit out by pgen to Python code. Pgen is the Python parser generator. It is useful to quickly create a parser from a grammar file in Python's grammar notation. But I don't want my parsers to be written in C (yet), so I'm translating the parsing tables to Python data structures and writing a Python parse engine. Note that the token numbers are constants determined by the standard Python tokenizer. The standard token module defines these numbers and their names (the names are not used much). The token numbers are hardcoded into the Python tokenizer and into pgen. 
A Python implementation of the Python tokenizer is also available, in the standard tokenize module. On the other hand, symbol numbers (representing the grammar's non-terminals) are assigned by pgen based on the actual grammar input. Note: this module is pretty much obsolete; the pgen module generates equivalent grammar tables directly from the Grammar.txt input file without having to invoke the Python pgen C program. """ # Python imports import re # Local imports from pgen2 import grammar, token class Converter(grammar.Grammar): """Grammar subclass that reads classic pgen output files. The run() method reads the tables as produced by the pgen parser generator, typically contained in two C files, graminit.h and graminit.c. The other methods are for internal use only. See the base class for more documentation. """ def run(self, graminit_h, graminit_c): """Load the grammar tables from the text files written by pgen.""" self.parse_graminit_h(graminit_h) self.parse_graminit_c(graminit_c) self.finish_off() def parse_graminit_h(self, filename): """Parse the .h file written by pgen. (Internal) This file is a sequence of #define statements defining the nonterminals of the grammar as numbers. We build two tables mapping the numbers to names and back. """ try: f = open(filename) except IOError, err: print "Can't open %s: %s" % (filename, err) return False self.symbol2number = {} self.number2symbol = {} lineno = 0 for line in f: lineno += 1 mo = re.match(r"^#define\s+(\w+)\s+(\d+)$", line) if not mo and line.strip(): print "%s(%s): can't parse %s" % (filename, lineno, line.strip()) else: symbol, number = mo.groups() number = int(number) assert symbol not in self.symbol2number assert number not in self.number2symbol self.symbol2number[symbol] = number self.number2symbol[number] = symbol return True def parse_graminit_c(self, filename): """Parse the .c file written by pgen. (Internal) The file looks as follows.
    The first two lines are always this:

        #include "pgenheaders.h"
        #include "grammar.h"

    After that come four blocks:

    1) one or more state definitions
    2) a table defining dfas
    3) a table defining labels
    4) a struct defining the grammar

    A state definition has the following form:
    - one or more arc arrays, each of the form:
        static arc arcs_<n>_<m>[<k>] = {
            {<i>, <j>},
            ...
        };
    - followed by a state array, of the form:
        static state states_<s>[<t>] = {
            {<k>, arcs_<n>_<m>},
            ...
        };

    """
    try:
        f = open(filename)
    except IOError, err:
        print "Can't open %s: %s" % (filename, err)
        return False
    # The code below essentially uses f's iterator-ness!
    lineno = 0

    # Expect the two #include lines
    lineno, line = lineno+1, f.next()
    assert line == '#include "pgenheaders.h"\n', (lineno, line)
    lineno, line = lineno+1, f.next()
    assert line == '#include "grammar.h"\n', (lineno, line)

    # Parse the state definitions
    lineno, line = lineno+1, f.next()
    allarcs = {}
    states = []
    while line.startswith("static arc "):
        while line.startswith("static arc "):
            mo = re.match(r"static arc arcs_(\d+)_(\d+)\[(\d+)\] = {$", line)
            assert mo, (lineno, line)
            n, m, k = map(int, mo.groups())
            arcs = []
            for _ in range(k):
                lineno, line = lineno+1, f.next()
                mo = re.match(r"\s+{(\d+), (\d+)},$", line)
                assert mo, (lineno, line)
                i, j = map(int, mo.groups())
                arcs.append((i, j))
            lineno, line = lineno+1, f.next()
            assert line == "};\n", (lineno, line)
            allarcs[(n, m)] = arcs
            lineno, line = lineno+1, f.next()
        mo = re.match(r"static state states_(\d+)\[(\d+)\] = {$", line)
        assert mo, (lineno, line)
        s, t = map(int, mo.groups())
        assert s == len(states), (lineno, line)
        state = []
        for _ in range(t):
            lineno, line = lineno+1, f.next()
            mo = re.match(r"\s+{(\d+), arcs_(\d+)_(\d+)},$", line)
            assert mo, (lineno, line)
            k, n, m = map(int, mo.groups())
            arcs = allarcs[n, m]
            assert k == len(arcs), (lineno, line)
            state.append(arcs)
        states.append(state)
        lineno, line = lineno+1, f.next()
        assert line == "};\n", (lineno, line)
        lineno, line = lineno+1, f.next()
    self.states = states

    # Parse the dfas
    dfas = {}
    mo = re.match(r"static dfa dfas\[(\d+)\] = {$", line)
    assert mo, (lineno, line)
    ndfas = int(mo.group(1))
    for i in range(ndfas):
        lineno, line = lineno+1, f.next()
        mo = re.match(r'\s+{(\d+), "(\w+)", (\d+), (\d+), states_(\d+),$',
                      line)
        assert mo, (lineno, line)
        symbol = mo.group(2)
        number, x, y, z = map(int, mo.group(1, 3, 4, 5))
        assert self.symbol2number[symbol] == number, (lineno, line)
        assert self.number2symbol[number] == symbol, (lineno, line)
        assert x == 0, (lineno, line)
        state = states[z]
        assert y == len(state), (lineno, line)
        lineno, line = lineno+1, f.next()
        mo = re.match(r'\s+("(?:\\\d\d\d)*")},$', line)
        assert mo, (lineno, line)
        first = {}
        rawbitset = eval(mo.group(1))
        for i, c in enumerate(rawbitset):
            byte = ord(c)
            for j in range(8):
                if byte & (1<<j):
                    first[i*8 + j] = 1

pgen2/pgen.py

# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

# Pgen imports
from . import grammar, token, tokenize

class PgenGrammar(grammar.Grammar):
    pass

class ParserGenerator(object):

    def __init__(self, filename, stream=None):
        close_stream = None
        if stream is None:
            stream = open(filename)
            close_stream = stream.close
        self.filename = filename
        self.stream = stream
        self.generator = tokenize.generate_tokens(stream.readline)
        self.gettoken()  # Initialize lookahead
        self.dfas, self.startsymbol = self.parse()
        if close_stream is not None:
            close_stream()
        self.first = {}  # map from symbol name to set of tokens
        self.addfirstsets()

    def make_grammar(self):
        c = PgenGrammar()
        names = self.dfas.keys()
        names.sort()
        names.remove(self.startsymbol)
        names.insert(0, self.startsymbol)
        for name in names:
            i = 256 + len(c.symbol2number)
            c.symbol2number[name] = i
            c.number2symbol[i] = name
        for name in names:
            dfa = self.dfas[name]
            states = []
            for state in dfa:
                arcs = []
                for label, next in state.arcs.iteritems():
                    arcs.append((self.make_label(c, label), dfa.index(next)))
                if state.isfinal:
                    arcs.append((0, dfa.index(state)))
                states.append(arcs)
            c.states.append(states)
            c.dfas[c.symbol2number[name]] = (states, self.make_first(c, name))
        c.start = c.symbol2number[self.startsymbol]
        return c

    def make_first(self, c, name):
        rawfirst = self.first[name]
        first = {}
        for label in rawfirst:
            ilabel = self.make_label(c, label)
            ##assert ilabel not in first # XXX failed on <> ... !=
            first[ilabel] = 1
        return first

    def make_label(self, c, label):
        # XXX Maybe this should be a method on a subclass of converter?
        ilabel = len(c.labels)
        if label[0].isalpha():
            # Either a symbol name or a named token
            if label in c.symbol2number:
                # A symbol name (a non-terminal)
                if label in c.symbol2label:
                    return c.symbol2label[label]
                else:
                    c.labels.append((c.symbol2number[label], None))
                    c.symbol2label[label] = ilabel
                    return ilabel
            else:
                # A named token (NAME, NUMBER, STRING)
                itoken = getattr(token, label, None)
                assert isinstance(itoken, int), label
                assert itoken in token.tok_name, label
                if itoken in c.tokens:
                    return c.tokens[itoken]
                else:
                    c.labels.append((itoken, None))
                    c.tokens[itoken] = ilabel
                    return ilabel
        else:
            # Either a keyword or an operator
            assert label[0] in ('"', "'"), label
            value = eval(label)
            if value[0].isalpha():
                # A keyword
                if value in c.keywords:
                    return c.keywords[value]
                else:
                    c.labels.append((token.NAME, value))
                    c.keywords[value] = ilabel
                    return ilabel
            else:
                # An operator (any non-numeric token)
                itoken = grammar.opmap[value]  # Fails if unknown token
                if itoken in c.tokens:
                    return c.tokens[itoken]
                else:
                    c.labels.append((itoken, None))
                    c.tokens[itoken] = ilabel
                    return ilabel

    def addfirstsets(self):
        names = self.dfas.keys()
        names.sort()
        for name in names:
            if name not in self.first:
                self.calcfirst(name)
            #print name, self.first[name].keys()

    def calcfirst(self, name):
        dfa = self.dfas[name]
        self.first[name] = None  # dummy to detect left recursion
        state = dfa[0]
        totalset = {}
        overlapcheck = {}
        for label, next in state.arcs.iteritems():
            if label in self.dfas:
                if label in self.first:
                    fset = self.first[label]
                    if fset is None:
                        raise ValueError("recursion for rule %r" % name)
                else:
                    self.calcfirst(label)
                    fset = self.first[label]
                totalset.update(fset)
                overlapcheck[label] = fset
            else:
                totalset[label] = 1
                overlapcheck[label] = {label: 1}
        inverse = {}
        for label, itsfirst in overlapcheck.iteritems():
            for symbol in itsfirst:
                if symbol in inverse:
                    raise ValueError("rule %s is ambiguous; %s is in the"
                                     " first sets of %s as well as %s" %
                                     (name, symbol, label, inverse[symbol]))
                inverse[symbol] = label
        self.first[name] = totalset

    def parse(self):
        dfas = {}
        startsymbol = None
        # MSTART: (NEWLINE | RULE)* ENDMARKER
        while self.type != token.ENDMARKER:
            while self.type == token.NEWLINE:
                self.gettoken()
            # RULE: NAME ':' RHS NEWLINE
            name = self.expect(token.NAME)
            self.expect(token.OP, ":")
            a, z = self.parse_rhs()
            self.expect(token.NEWLINE)
            #self.dump_nfa(name, a, z)
            dfa = self.make_dfa(a, z)
            #self.dump_dfa(name, dfa)
            oldlen = len(dfa)
            self.simplify_dfa(dfa)
            newlen = len(dfa)
            dfas[name] = dfa
            #print name, oldlen, newlen
            if startsymbol is None:
                startsymbol = name
        return dfas, startsymbol

    def make_dfa(self, start, finish):
        # To turn an NFA into a DFA, we define the states of the DFA
        # to correspond to *sets* of states of the NFA.  Then do some
        # state reduction.  Let's represent sets as dicts with 1 for
        # values.
        assert isinstance(start, NFAState)
        assert isinstance(finish, NFAState)
        def closure(state):
            base = {}
            addclosure(state, base)
            return base
        def addclosure(state, base):
            assert isinstance(state, NFAState)
            if state in base:
                return
            base[state] = 1
            for label, next in state.arcs:
                if label is None:
                    addclosure(next, base)
        states = [DFAState(closure(start), finish)]
        for state in states:  # NB states grows while we're iterating
            arcs = {}
            for nfastate in state.nfaset:
                for label, next in nfastate.arcs:
                    if label is not None:
                        addclosure(next, arcs.setdefault(label, {}))
            for label, nfaset in arcs.iteritems():
                for st in states:
                    if st.nfaset == nfaset:
                        break
                else:
                    st = DFAState(nfaset, finish)
                    states.append(st)
                state.addarc(st, label)
        return states  # List of DFAState instances; first one is start

    def dump_nfa(self, name, start, finish):
        print "Dump of NFA for", name
        todo = [start]
        for i, state in enumerate(todo):
            print "  State", i, state is finish and "(final)" or ""
            for label, next in state.arcs:
                if next in todo:
                    j = todo.index(next)
                else:
                    j = len(todo)
                    todo.append(next)
                if label is None:
                    print "    -> %d" % j
                else:
                    print "    %s -> %d" % (label, j)

    def dump_dfa(self, name, dfa):
        print "Dump of DFA for", name
        for i, state in enumerate(dfa):
            print "  State", i, state.isfinal and "(final)" or ""
            for label, next in state.arcs.iteritems():
                print "    %s -> %d" % (label, dfa.index(next))

    def simplify_dfa(self, dfa):
        # This is not theoretically optimal, but works well enough.
        # Algorithm: repeatedly look for two states that have the same
        # set of arcs (same labels pointing to the same nodes) and
        # unify them, until things stop changing.

        # dfa is a list of DFAState instances
        changes = True
        while changes:
            changes = False
            for i, state_i in enumerate(dfa):
                for j in range(i+1, len(dfa)):
                    state_j = dfa[j]
                    if state_i == state_j:
                        #print "  unify", i, j
                        del dfa[j]
                        for state in dfa:
                            state.unifystate(state_j, state_i)
                        changes = True
                        break

    def parse_rhs(self):
        # RHS: ALT ('|' ALT)*
        a, z = self.parse_alt()
        if self.value != "|":
            return a, z
        else:
            aa = NFAState()
            zz = NFAState()
            aa.addarc(a)
            z.addarc(zz)
            while self.value == "|":
                self.gettoken()
                a, z = self.parse_alt()
                aa.addarc(a)
                z.addarc(zz)
            return aa, zz

    def parse_alt(self):
        # ALT: ITEM+
        a, b = self.parse_item()
        while (self.value in ("(", "[") or
               self.type in (token.NAME, token.STRING)):
            c, d = self.parse_item()
            b.addarc(c)
            b = d
        return a, b

    def parse_item(self):
        # ITEM: '[' RHS ']' | ATOM ['+' | '*']
        if self.value == "[":
            self.gettoken()
            a, z = self.parse_rhs()
            self.expect(token.OP, "]")
            a.addarc(z)
            return a, z
        else:
            a, z = self.parse_atom()
            value = self.value
            if value not in ("+", "*"):
                return a, z
            self.gettoken()
            z.addarc(a)
            if value == "+":
                return a, z
            else:
                return a, a

    def parse_atom(self):
        # ATOM: '(' RHS ')' | NAME | STRING
        if self.value == "(":
            self.gettoken()
            a, z = self.parse_rhs()
            self.expect(token.OP, ")")
            return a, z
        elif self.type in (token.NAME, token.STRING):
            a = NFAState()
            z = NFAState()
            a.addarc(z, self.value)
            self.gettoken()
            return a, z
        else:
            self.raise_error("expected (...) or NAME or STRING, got %s/%s",
                             self.type, self.value)

    def expect(self, type, value=None):
        if self.type != type or (value is not None and self.value != value):
            self.raise_error("expected %s/%s, got %s/%s",
                             type, value, self.type, self.value)
        value = self.value
        self.gettoken()
        return value

    def gettoken(self):
        tup = self.generator.next()
        while tup[0] in (tokenize.COMMENT, tokenize.NL):
            tup = self.generator.next()
        self.type, self.value, self.begin, self.end, self.line = tup
        #print token.tok_name[self.type], repr(self.value)

    def raise_error(self, msg, *args):
        if args:
            try:
                msg = msg % args
            except:
                msg = " ".join([msg] + map(str, args))
        raise SyntaxError(msg, (self.filename, self.end[0],
                                self.end[1], self.line))

class NFAState(object):

    def __init__(self):
        self.arcs = []  # list of (label, NFAState) pairs

    def addarc(self, next, label=None):
        assert label is None or isinstance(label, str)
        assert isinstance(next, NFAState)
        self.arcs.append((label, next))

class DFAState(object):

    def __init__(self, nfaset, final):
        assert isinstance(nfaset, dict)
        assert isinstance(iter(nfaset).next(), NFAState)
        assert isinstance(final, NFAState)
        self.nfaset = nfaset
        self.isfinal = final in nfaset
        self.arcs = {}  # map from label to DFAState

    def addarc(self, next, label):
        assert isinstance(label, str)
        assert label not in self.arcs
        assert isinstance(next, DFAState)
        self.arcs[label] = next

    def unifystate(self, old, new):
        for label, next in self.arcs.iteritems():
            if next is old:
                self.arcs[label] = new

    def __eq__(self, other):
        # Equality test -- ignore the nfaset instance variable
        assert isinstance(other, DFAState)
        if self.isfinal != other.isfinal:
            return False
        # Can't just return self.arcs == other.arcs, because that
        # would invoke this method recursively, with cycles...
        if len(self.arcs) != len(other.arcs):
            return False
        for label, next in self.arcs.iteritems():
            if next is not other.arcs.get(label):
                return False
        return True

    __hash__ = None  # For Py3 compatibility.
def generate_grammar(filename="Grammar.txt"):
    p = ParserGenerator(filename)
    return p.make_grammar()
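The subset construction that `ParserGenerator.make_dfa` performs can be sketched in a standalone, runnable form. This is an illustrative Python 3 rewrite, not lib2to3 code: the names `closure` and `nfa_to_dfa` and the dict-of-arcs NFA encoding are my own; the algorithm is the same — each DFA state is a *set* of NFA states, and epsilon arcs (label `None`) are folded in via the closure.

```python
def closure(state, arcs):
    """Return the epsilon-closure of `state` as a frozenset of NFA states."""
    seen = set()
    todo = [state]
    while todo:
        s = todo.pop()
        if s in seen:
            continue
        seen.add(s)
        for label, nxt in arcs.get(s, []):
            if label is None:          # epsilon arc: follow without consuming input
                todo.append(nxt)
    return frozenset(seen)

def nfa_to_dfa(start, arcs):
    """arcs maps NFA state -> [(label_or_None, next_state), ...]."""
    states = [closure(start, arcs)]    # DFA states, discovered in order
    table = {}                         # DFA transition table
    for nfaset in states:              # NB: `states` grows while we iterate
        moves = {}
        for s in nfaset:
            for label, nxt in arcs.get(s, []):
                if label is not None:
                    moves.setdefault(label, set()).update(closure(nxt, arcs))
        table[nfaset] = {}
        for label, target in moves.items():
            target = frozenset(target)
            if target not in states:
                states.append(target)
            table[nfaset][label] = target
    return states, table

# NFA for the regex a b*:  0 -a-> 1,  1 -eps-> 2,  2 -b-> 2
nfa = {0: [("a", 1)], 1: [(None, 2)], 2: [("b", 2)]}
states, table = nfa_to_dfa(0, nfa)
# Three DFA states: {0}, {1,2}, {2}
```

Like `make_dfa`, the loop iterates over a list that it appends to, so newly discovered state sets get processed in the same pass.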
Grammar.txt

# Grammar for Python

# Note:  Changing the grammar specified in this file will most likely
#        require corresponding changes in the parser module
#        (../Modules/parsermodule.c).  If you can't make the changes to
#        that module yourself, please co-ordinate the required changes
#        with someone who can; ask around on python-dev for help.  Fred
#        Drake will probably be listening there.

# NOTE WELL: You should also follow all the steps listed in PEP 306,
# "How to Change Python's Grammar"

# Commands for Kees Blom's railroad program
#diagram:token NAME
#diagram:token NUMBER
#diagram:token STRING
#diagram:token NEWLINE
#diagram:token ENDMARKER
#diagram:token INDENT
#diagram:output\input python.bla
#diagram:token DEDENT
#diagram:output\textwidth 20.04cm\oddsidemargin  0.0cm\evensidemargin 0.0cm
#diagram:rules

# Start symbols for the grammar:
#       file_input is a module or sequence of commands read from an input file;
#       single_input is a single interactive statement;
#       eval_input is the input for the eval() and input() functions.
# NB: compound_stmt in single_input is followed by extra NEWLINE!
file_input: (NEWLINE | stmt)* ENDMARKER
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
eval_input: testlist NEWLINE* ENDMARKER

decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef)
funcdef: 'def' NAME parameters ['->' test] ':' suite
parameters: '(' [typedargslist] ')'
typedargslist: ((tfpdef ['=' test] ',')*
                ('*' [tname] (',' tname ['=' test])* [',' '**' tname] | '**' tname)
                | tfpdef ['=' test] (',' tfpdef ['=' test])* [','])
tname: NAME [':' test]
tfpdef: tname | '(' tfplist ')'
tfplist: tfpdef (',' tfpdef)* [',']
varargslist: ((vfpdef ['=' test] ',')*
              ('*' [vname] (',' vname ['=' test])* [',' '**' vname] | '**' vname)
              | vfpdef ['=' test] (',' vfpdef ['=' test])* [','])
vname: NAME
vfpdef: vname | '(' vfplist ')'
vfplist: vfpdef (',' vfpdef)* [',']

stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | print_stmt  | del_stmt | pass_stmt | flow_stmt |
             import_stmt | global_stmt | exec_stmt | assert_stmt)
expr_stmt: testlist_star_expr (augassign (yield_expr|testlist) |
                     ('=' (yield_expr|testlist_star_expr))*)
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' |
            '<<=' | '>>=' | '**=' | '//=')
# For normal assignments, additional restrictions enforced by the interpreter
print_stmt: 'print' ( [ test (',' test)* [','] ] |
                      '>>' test [ (',' test)+ [','] ] )
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test | ',' test [',' test]]]
import_stmt: import_name | import_from
import_name: 'import' dotted_as_names
import_from: ('from' ('.'* dotted_name | '.'+)
              'import' ('*' | '(' import_as_names ')' | import_as_names))
import_as_name: NAME ['as' NAME]
dotted_as_name: dotted_name ['as' NAME]
import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: ('global' | 'nonlocal') NAME (',' NAME)*
exec_stmt: 'exec' expr ['in' test [',' test]]
assert_stmt: 'assert' test [',' test]

compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
           ((except_clause ':' suite)+ ['else' ':' suite]
            ['finally' ':' suite] |
            'finally' ':' suite))
with_stmt: 'with' with_item (',' with_item)* ':' suite
with_item: test ['as' expr]
with_var: 'as' expr
# NB compile.c makes sure that the default except clause is last
except_clause: 'except' [test [(',' | 'as') test]]
suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT

# Backward compatibility cruft to support:
# [ x for x in lambda: True, lambda: False if x() ]
# even while also allowing:
# lambda x: 5 if x else 2
# (But not a mix of the two)
testlist_safe: old_test [(',' old_test)+ [',']]
old_test: or_test | old_lambdef
old_lambdef: 'lambda' [varargslist] ':' old_test

test: or_test ['if' or_test 'else' test] | lambdef
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: expr (comp_op expr)*
comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
star_expr: '*' expr
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: atom trailer* ['**' factor]
atom: ('(' [yield_expr|testlist_gexp] ')' |
       '[' [listmaker] ']' |
       '{' [dictsetmaker] '}' |
       '`' testlist1 '`' |
       NAME | NUMBER | STRING+ | '.' '.' '.')
listmaker: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )
testlist_gexp: test ( comp_for | (',' (test|star_expr))* [','] )
lambdef: 'lambda' [varargslist] ':' test
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
dictsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) |
                (test (comp_for | (',' test)* [','])) )

classdef: 'class' NAME ['(' [arglist] ')'] ':' suite

arglist: (argument ',')* (argument [',']
                         |'*' test (',' argument)* [',' '**' test]
                         |'**' test)
argument: test [comp_for] | test '=' test  # Really [keyword '='] test

comp_iter: comp_for | comp_if
comp_for: 'for' exprlist 'in' testlist_safe [comp_iter]
comp_if: 'if' old_test [comp_iter]

testlist1: test (',' test)*

# not used in grammar, but may appear in "node" passed from Parser to Compiler
encoding_decl: NAME

yield_expr: 'yield' [testlist]
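Each rule in this grammar is turned into an NFA by `parse_rhs`/`parse_alt`/`parse_item` in pgen.py, but the correspondence is easy to see by hand. As an illustration (my own toy code, not part of lib2to3), a repetition rule such as `testlist: test (',' test)* [',']` reads as a parsing loop — parse one `test`, then keep consuming `','` + `test` pairs:

```python
def parse_testlist(tokens, pos, parse_test):
    # testlist: test (',' test)*   -- trailing-comma handling omitted
    node, pos = parse_test(tokens, pos)
    items = [node]
    while pos < len(tokens) and tokens[pos] == ",":
        pos += 1                       # consume the ','
        node, pos = parse_test(tokens, pos)
        items.append(node)
    return items, pos

# A toy `test` nonterminal that accepts a single NAME token.
def parse_name(tokens, pos):
    return tokens[pos], pos + 1

items, pos = parse_testlist(["a", ",", "b", ",", "c"], 0, parse_name)
```

pgen generates the equivalent DFA tables mechanically instead of hand-writing such loops.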
cCs2||_||_tt|i|||dS(N(t nobackupst show_diffstsuperR t__init__(tselftfixerstoptionstexplicitR R ((s$/usr/lib64/python2.6/lib2to3/main.pyRs  cOs3|ii|||f|ii|||dS(N(terrorstappendtloggerterror(Rtmsgtargstkwargs((s$/usr/lib64/python2.6/lib2to3/main.pyt log_error$sc Cs|ip|d}tii|o@yti|Wqgtij o}|id|qgXnyti||Wqtij o}|id||qXntt |i }||||||ipt i ||ndS(Ns.baksCan't remove backup %ssCan't rename %s to %s( R tostpathtlexiststremoveRt log_messagetrenameRR t write_filetshutiltcopymode(Rtnew_textR told_texttencodingtbackupterrtwrite((s$/usr/lib64/python2.6/lib2to3/main.pyR"(s   c Cs|o|id|n|id||iot|||}yl|idj oB|iiiz'x|D] }|GHqvWtii WdQXnx|D] }|GHqWWqt j ot d|fdSXndS(NsNo changes to %ss Refactored %ss+couldn't encode %s's diff for your terminal( R R R t output_locktNonet__exit__t __enter__tsyststdouttflushtUnicodeEncodeErrortwarn(RtoldtnewR tequalt diff_linestline((s$/usr/lib64/python2.6/lib2to3/main.pyt print_output;s&    (t__name__t __module__t__doc__RRR"R9(((s$/usr/lib64/python2.6/lib2to3/main.pyR s    cCstid|fIJdS(Ns WARNING: %s(R/tstderr(R((s$/usr/lib64/python2.6/lib2to3/main.pyR3Qsc s)tidd}|idddddd|id d dd d gdd |iddddd ddddd|idddd d gdd|idddddd|idddddd|idddddd |id!dddd"|id#d$dddd%|id&d'ddd tdd(t}h}|i|\}}|i o|iotd)n|i o|io|i d*n|i o4d+GHxt i D] }|GHqW|pd,Sn|pt id-IJt id.IJd/Sd0|jo&t}|iot id1IJd/Sn|iot|d2s talls.fix_s+Sorry, -j isn't supported on this platform.((toptparset OptionParsert add_optiontFalset parse_argsR*tno_diffsR3R Rt list_fixesRtget_all_fix_namesR/R=tTrueRGtverbosetloggingtDEBUGtINFOt basicConfigtsettget_fixers_from_packagetnofixRKtaddtuniont differenceR tsortedRtrefactor_stdint doctests_onlyt processestMultiprocessingUnsupportedtAssertionErrort summarizeREtbool(RLRtparserRctflagsRtfixnameRIt avail_fixestunwanted_fixesRt all_presentRKt requestedt fixer_namestrt((RLs$/usr/lib64/python2.6/lib2to3/main.pytmainUs                 !  
fixer_util.py

"""Utility functions, node construction macros, etc."""

# Author: Collin Winter

# Local imports
from .pgen2 import token
from .pytree import Leaf, Node
from .pygram import python_symbols as syms
from . import patcomp


###########################################################
### Common node-construction "macros"
###########################################################

def KeywordArg(keyword, value):
    return Node(syms.argument,
                [keyword, Leaf(token.EQUAL, u'='), value])

def LParen():
    return Leaf(token.LPAR, u"(")

def RParen():
    return Leaf(token.RPAR, u")")

def Assign(target, source):
    """Build an assignment statement"""
    if not isinstance(target, list):
        target = [target]
    if not isinstance(source, list):
        source.prefix = u" "
        source = [source]
    return Node(syms.atom,
                target + [Leaf(token.EQUAL, u"=", prefix=u" ")] + source)

def Name(name, prefix=None):
    """Return a NAME leaf"""
    return Leaf(token.NAME, name, prefix=prefix)

def Attr(obj, attr):
    """A node tuple for obj.attr"""
    return [obj, Node(syms.trailer, [Dot(), attr])]

def Comma():
    """A comma leaf"""
    return Leaf(token.COMMA, u",")

def Dot():
    """A period (.) leaf"""
    return Leaf(token.DOT, u".")

def ArgList(args, lparen=LParen(), rparen=RParen()):
    """A parenthesised argument list, used by Call()"""
    node = Node(syms.trailer, [lparen.clone(), rparen.clone()])
    if args:
        node.insert_child(1, Node(syms.arglist, args))
    return node

def Call(func_name, args=None, prefix=None):
    """A function call"""
    node = Node(syms.power, [func_name, ArgList(args)])
    if prefix is not None:
        node.prefix = prefix
    return node

def Newline():
    """A newline literal"""
    return Leaf(token.NEWLINE, u"\n")

def BlankLine():
    """A blank line"""
    return Leaf(token.NEWLINE, u"")

def Number(n, prefix=None):
    return Leaf(token.NUMBER, n, prefix=prefix)

def Subscript(index_node):
    """A numeric or string subscript"""
    return Node(syms.trailer, [Leaf(token.LBRACE, u'['),
                               index_node,
                               Leaf(token.RBRACE, u']')])

def String(string, prefix=None):
    """A string leaf"""
    return Leaf(token.STRING, string, prefix=prefix)

def ListComp(xp, fp, it, test=None):
    """A list comprehension of the form [xp for fp in it if test].

    If test is None, the "if test" part is omitted.
    """
    xp.prefix = u""
    fp.prefix = u" "
    it.prefix = u" "
    for_leaf = Leaf(token.NAME, u"for")
    for_leaf.prefix = u" "
    in_leaf = Leaf(token.NAME, u"in")
    in_leaf.prefix = u" "
    inner_args = [for_leaf, fp, in_leaf, it]
    if test:
        test.prefix = u" "
        if_leaf = Leaf(token.NAME, u"if")
        if_leaf.prefix = u" "
        inner_args.append(Node(syms.comp_if, [if_leaf, test]))
    inner = Node(syms.listmaker, [xp, Node(syms.comp_for, inner_args)])
    return Node(syms.atom,
                [Leaf(token.LBRACE, u"["),
                 inner,
                 Leaf(token.RBRACE, u"]")])

def FromImport(package_name, name_leafs):
    """ Return an import statement in the form:
        from package import name_leafs"""
    # XXX: May not handle dotted imports properly (eg, package_name='foo.bar')
    #assert package_name == '.' or '.' not in package_name, "FromImport has "\
    #       "not been tested with dotted package names -- use at your own "\
    #       "peril!"

    for leaf in name_leafs:
        # Pull the leaves out of their old tree
        leaf.remove()

    children = [Leaf(token.NAME, u'from'),
                Leaf(token.NAME, package_name, prefix=u" "),
                Leaf(token.NAME, u'import', prefix=u" "),
                Node(syms.import_as_names, name_leafs)]
    imp = Node(syms.import_from, children)
    return imp


###########################################################
### Determine whether a node represents a given literal
###########################################################

def is_tuple(node):
    """Does the node represent a tuple literal?"""
    if isinstance(node, Node) and node.children == [LParen(), RParen()]:
        return True
    return (isinstance(node, Node)
            and len(node.children) == 3
            and isinstance(node.children[0], Leaf)
            and isinstance(node.children[1], Node)
            and isinstance(node.children[2], Leaf)
            and node.children[0].value == u"("
            and node.children[2].value == u")")

def is_list(node):
    """Does the node represent a list literal?"""
    return (isinstance(node, Node)
            and len(node.children) > 1
            and isinstance(node.children[0], Leaf)
            and isinstance(node.children[-1], Leaf)
            and node.children[0].value == u"["
            and node.children[-1].value == u"]")


###########################################################
### Misc
###########################################################

def parenthesize(node):
    return Node(syms.atom, [LParen(), node, RParen()])

consuming_calls = set(["sorted", "list", "set", "any", "all", "tuple", "sum",
                       "min", "max"])

def attr_chain(obj, attr):
    """Follow an attribute chain.

    If you have a chain of objects where a.foo -> b, b.foo-> c, etc,
    use this to iterate over all objects in the chain. Iteration is
    terminated by getattr(x, attr) is None.

    Args:
        obj: the starting object
        attr: the name of the chaining attribute

    Yields:
        Each successive object in the chain.
    """
    next = getattr(obj, attr)
    while next:
        yield next
        next = getattr(next, attr)

p0 = """for_stmt< 'for' any 'in' node=any ':' any* >
        | comp_for< 'for' any 'in' node=any any* >
     """
p1 = """
power<
    ( 'iter' | 'list' | 'tuple' | 'sorted' | 'set' | 'sum' |
      'any' | 'all' | (any* trailer< '.' 'join' >) )
    trailer< '(' node=any ')' >
    any*
>
"""
p2 = """
power<
    'sorted'
    trailer< '(' arglist<node=any any*> ')' >
    any*
>
"""

pats_built = False
def in_special_context(node):
    """ Returns true if node is in an environment where all that is required
        of it is being iterable (ie, it doesn't matter if it returns a list
        or an iterator).

        See test_map_nochange in test_fixers.py for some examples and tests.
        """
    global p0, p1, p2, pats_built
    if not pats_built:
        p1 = patcomp.compile_pattern(p1)
        p0 = patcomp.compile_pattern(p0)
        p2 = patcomp.compile_pattern(p2)
        pats_built = True
    patterns = [p0, p1, p2]
    for pattern, parent in zip(patterns, attr_chain(node, "parent")):
        results = {}
        if pattern.match(parent, results) and results["node"] is node:
            return True
    return False

def is_probably_builtin(node):
    """
    Check that something isn't an attribute or function name etc.
    """
    prev = node.prev_sibling
    if prev is not None and prev.type == token.DOT:
        # Attribute lookup.
        return False
    parent = node.parent
    if parent.type in (syms.funcdef, syms.classdef):
        return False
    if parent.type == syms.expr_stmt and parent.children[0] is node:
        # Assignment.
        return False
    if parent.type == syms.parameters or \
            (parent.type == syms.typedargslist and (
            (prev is not None and prev.type == token.COMMA) or
            parent.children[0] is node
            )):
        # The name of an argument.
        return False
    return True


###########################################################
### The following functions are to find bindings in a suite
###########################################################

def make_suite(node):
    if node.type == syms.suite:
        return node
    node = node.clone()
    parent, node.parent = node.parent, None
    suite = Node(syms.suite, [node])
    suite.parent = parent
    return suite

def find_root(node):
    """Find the top level namespace."""
    # Scamper up to the top level namespace
    while node.type != syms.file_input:
        assert node.parent, "Tree is insane! root found before "\
                           "file_input node was found."
        node = node.parent
    return node

def does_tree_import(package, name, node):
    """ Returns true if name is imported from package at the
        top level of the tree which node belongs to.
        To cover the case of an import like 'import foo', use
        None for the package and 'foo' for the name. """
    binding = find_binding(name, find_root(node), package)
    return bool(binding)

def is_import(node):
    """Returns true if the node is an import statement."""
    return node.type in (syms.import_name, syms.import_from)

def touch_import(package, name, node):
    """ Works like `does_tree_import` but adds an import statement
        if it was not imported. """
    def is_import_stmt(node):
        return node.type == syms.simple_stmt and node.children and \
               is_import(node.children[0])

    root = find_root(node)

    if does_tree_import(package, name, root):
        return

    # figure out where to insert the new import.  First try to find
    # the first import and then skip to the last one.
    insert_pos = offset = 0
    for idx, node in enumerate(root.children):
        if not is_import_stmt(node):
            continue
        for offset, node2 in enumerate(root.children[idx:]):
            if not is_import_stmt(node2):
                break
        insert_pos = idx + offset
        break

    # if there are no imports where we can insert, find the docstring.
    # if that also fails, we stick to the beginning of the file
    if insert_pos == 0:
        for idx, node in enumerate(root.children):
            if node.type == syms.simple_stmt and node.children and \
               node.children[0].type == token.STRING:
                insert_pos = idx + 1
                break

    if package is None:
        import_ = Node(syms.import_name, [
            Leaf(token.NAME, u'import'),
            Leaf(token.NAME, name, prefix=u' ')
        ])
    else:
        import_ = FromImport(package, [Leaf(token.NAME, name, prefix=u' ')])

    children = [import_, Newline()]
    root.insert_child(insert_pos, Node(syms.simple_stmt, children))


_def_syms = set([syms.classdef, syms.funcdef])
def find_binding(name, node, package=None):
    """ Returns the node which binds variable name, otherwise None.
        If optional argument package is supplied, only imports will
        be returned. See test cases for examples."""
    for child in node.children:
        ret = None
        if child.type == syms.for_stmt:
            if _find(name, child.children[1]):
                return child
            n = find_binding(name, make_suite(child.children[-1]), package)
            if n: ret = n
        elif child.type in (syms.if_stmt, syms.while_stmt):
            n = find_binding(name, make_suite(child.children[-1]), package)
            if n: ret = n
        elif child.type == syms.try_stmt:
            n = find_binding(name, make_suite(child.children[2]), package)
            if n:
                ret = n
            else:
                for i, kid in enumerate(child.children[3:]):
                    if kid.type == token.COLON and kid.value == ":":
                        # i+3 is the colon, i+4 is the suite
                        n = find_binding(name, make_suite(child.children[i+4]), package)
                        if n: ret = n
        elif child.type in _def_syms and child.children[1].value == name:
            ret = child
        elif _is_import_binding(child, name, package):
            ret = child
        elif child.type == syms.simple_stmt:
            ret = find_binding(name, child, package)
        elif child.type == syms.expr_stmt:
            if _find(name, child.children[0]):
                ret = child

        if ret:
            if not package:
                return ret
            if is_import(ret):
                return ret
    return None

_block_syms = set([syms.funcdef, syms.classdef, syms.trailer])
def _find(name, node):
    nodes = [node]
    while nodes:
        node = nodes.pop()
        if node.type > 256 and node.type not in _block_syms:
            nodes.extend(node.children)
        elif node.type == token.NAME and node.value == name:
            return node
    return None

def _is_import_binding(node, name, package=None):
    """ Will return node if node will import name, or node
        will import * from package.  None is returned otherwise.
        See test cases for examples. """

    if node.type == syms.import_name and not package:
        imp = node.children[1]
        if imp.type == syms.dotted_as_names:
            for child in imp.children:
                if child.type == syms.dotted_as_name:
                    if child.children[2].value == name:
                        return node
                elif child.type == token.NAME and child.value == name:
                    return node
        elif imp.type == syms.dotted_as_name:
            last = imp.children[-1]
            if last.type == token.NAME and last.value == name:
                return node
        elif imp.type == token.NAME and imp.value == name:
            return node
    elif node.type == syms.import_from:
        # unicode(...) is used to make life easier here, because
        # from a.b import parses to ['import', ['a', '.', 'b'], ...]
        if package and unicode(node.children[1]).strip() != package:
            return None
        n = node.children[3]
        if package and _find(u'as', n):
            # See test_from_import_as for explanation
            return None
        elif n.type == syms.import_as_names and _find(name, n):
            return node
        elif n.type == syms.import_as_name:
            child = n.children[2]
            if child.type == token.NAME and child.value == name:
                return node
        elif n.type == token.NAME and n.value == name:
            return node
        elif package and n.type == token.STAR:
            return node
    return None

fixer_base.py

# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Base class for fixers (optional, but recommended)."""

# Python imports
import logging
import itertools

# Local imports
from .patcomp import PatternCompiler
from . import pygram
from .fixer_util import does_tree_import

class BaseFix(object):

    """Optional base class for fixers.

    The subclass name must be FixFooBar where FooBar is the result of
    removing underscores and capitalizing the words of the fix name.
    For example, the class name for a fixer named 'has_key' should be
    FixHasKey.
    """

    PATTERN = None  # Most subclasses should override with a string literal
    pattern = None  # Compiled pattern, set by compile_pattern()
    options = None  # Options object passed to initializer
    filename = None # The filename (set by set_filename)
    logger = None   # A logger (set by set_filename)
    numbers = itertools.count(1) # For new_name()
    used_names = set() # A set of all used NAMEs
    order = "post"  # Does the fixer prefer pre- or post-order traversal
    explicit = False # Is this ignored by refactor.py -f all?
    run_order = 5   # Fixers will be sorted by run order before execution
                    # Lower numbers will be run first.
    _accept_type = None # [Advanced and not public] This tells RefactoringTool
                        # which node type to accept when there's not a pattern.

    # Shortcut for access to Python grammar symbols
    syms = pygram.python_symbols

    def __init__(self, options, log):
        """Initializer.  Subclass may override.

        Args:
            options: a dict containing the options passed to RefactoringTool
            that could be used to customize the fixer through the command line.
            log: a list to append warnings and other messages to.
        """
        self.options = options
        self.log = log
        self.compile_pattern()

    def compile_pattern(self):
        """Compiles self.PATTERN into self.pattern.

        Subclass may override if it doesn't want to use
        self.{pattern,PATTERN} in .match().
        """
        if self.PATTERN is not None:
            self.pattern = PatternCompiler().compile_pattern(self.PATTERN)

    def set_filename(self, filename):
        """Set the filename, and a logger derived from it.

        The main refactoring tool should call this.
        """
        self.filename = filename
        self.logger = logging.getLogger(filename)

    def match(self, node):
        """Returns match for a given parse tree node.

        Should return a true or false object (not necessarily a bool).
        It may return a non-empty dict of matching sub-nodes as
        returned by a matching pattern.

        Subclass may override.
        """
        results = {"node": node}
        return self.pattern.match(node, results) and results

    def transform(self, node, results):
        """Returns the transformation for a given parse tree node.

        Args:
          node: the root of the parse tree that matched the fixer.
          results: a dict mapping symbolic names to part of the match.

        Returns:
          None, or a node that is a modified copy of the
          argument node.  The node argument may also be modified in-place to
          effect the same change.

        Subclass *must* override.
        """
        raise NotImplementedError()

    def new_name(self, template=u"xxx_todo_changeme"):
        """Return a string suitable for use as an identifier

        The new name is guaranteed not to conflict with
        other identifiers.
        """
        name = template
        while name in self.used_names:
            name = template + unicode(self.numbers.next())
        self.used_names.add(name)
        return name

    def log_message(self, message):
        if self.first_log:
            self.first_log = False
            self.log.append("### In file %s ###" % self.filename)
        self.log.append(message)

    def cannot_convert(self, node, reason=None):
        """Warn the user that a given chunk of code is not valid Python 3,
        but that it cannot be converted automatically.

        First argument is the top-level node for the code in question.
        Optional second argument is why it can't be converted.
        """
        lineno = node.get_lineno()
        for_output = node.clone()
        for_output.prefix = u""
        msg = "Line %d: could not convert: %s"
        self.log_message(msg % (lineno, for_output))
        if reason:
            self.log_message(reason)

    def warning(self, node, reason):
        """Used for warning the user about possible uncertainty in the
        translation.

        First argument is the top-level node for the code in question.
        Optional second argument is why it can't be converted.
        """
        lineno = node.get_lineno()
        self.log_message("Line %d: %s" % (lineno, reason))

    def start_tree(self, tree, filename):
        """Some fixers need to maintain tree-wide state.
        This method is called once, at the start of tree fix-up.

        tree - the root node of the tree to be processed.
        filename - the name of the file the tree came from.
        """
        self.used_names = tree.used_names
        self.set_filename(filename)
        self.numbers = itertools.count(1)
        self.first_log = True

    def finish_tree(self, tree, filename):
        """Some fixers need to maintain tree-wide state.
        This method is called once, at the conclusion of tree fix-up.

        tree - the root node of the tree to be processed.
        filename - the name of the file the tree came from.
        """
        pass


class ConditionalFix(BaseFix):
    """ Base class for fixers which do not execute if an import is found. """

    # This is the name of the import which, if found, will cause the test
    # to be skipped.
    skip_on = None

    def start_tree(self, *args):
        super(ConditionalFix, self).start_tree(*args)
        self._should_skip = None

    def should_skip(self, node):
        if self._should_skip is not None:
            return self._should_skip
        pkg = self.skip_on.split(".")
        name = pkg[-1]
        pkg = ".".join(pkg[:-1])
        self._should_skip = does_tree_import(pkg, name, node)
        return self._should_skip

fixes/fix_map.pyc
[compiled bytecode for lib2to3/fixes/fix_map.py -- binary contents omitted; recovered module docstring:]
"""Fixer that changes map(F, ...) into list(map(F, ...)) unless there
exists a 'from future_builtins import map' statement in the top-level
namespace.

As a special case, map(None, X) is changed into list(X).  (This is
necessary because the semantics are changed in this case -- the new
map(None, X) is equivalent to [(x,) for x in X].)

We avoid the transformation (except for the special case mentioned
above) if the map() call is directly contained in iter(<>), list(<>),
tuple(<>), sorted(<>), ...join(<>), or for V in <>:.

NOTE: This is still not correct if the original code was depending on
map(F, X, Y, ...) to go on until the longest argument is exhausted,
substituting None for missing values -- like zip(), it now stops as
soon as the shortest argument is exhausted.
"""
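The `attr_chain` helper in fixer_util.py above walks a chain of objects linked through a named attribute; lib2to3 uses it to climb `parent` pointers when `in_special_context` matches a node's enclosing constructs. A standalone sketch of the same pattern — the minimal `Node` class here is an illustrative stand-in, not lib2to3's:

```python
def attr_chain(obj, attr):
    """Yield obj.<attr>, obj.<attr>.<attr>, ... until a falsy link ends the chain."""
    link = getattr(obj, attr)
    while link:
        yield link
        link = getattr(link, attr)

class Node(object):
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

root = Node("file_input")
stmt = Node("simple_stmt", parent=root)
leaf = Node("NAME", parent=stmt)

# Climb from a leaf to the root via 'parent' links.
print([n.name for n in attr_chain(leaf, "parent")])  # → ['simple_stmt', 'file_input']
```

The generator never yields the starting object itself, which is why `in_special_context` can zip successive parents against its pattern list.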
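`touch_import` above decides where to place a new import: after the file's first run of import statements, or, failing that, just after a module docstring. The index search can be sketched over a flat list of statement kinds — the string tags used here are illustrative, not lib2to3 node types:

```python
def import_insert_pos(kinds):
    """Return the child index at which a new import would be inserted.

    `kinds` is a simplified view of a module's top-level statements,
    e.g. ["docstring", "import", "code"].
    """
    insert_pos = offset = 0
    for idx, kind in enumerate(kinds):
        if kind != "import":
            continue
        # Found the first import; skip to the end of the run of imports.
        for offset, kind2 in enumerate(kinds[idx:]):
            if kind2 != "import":
                break
        insert_pos = idx + offset
        break

    # No imports at all: insert just after a leading docstring, if any.
    if insert_pos == 0:
        for idx, kind in enumerate(kinds):
            if kind == "docstring":
                insert_pos = idx + 1
                break

    return insert_pos

print(import_insert_pos(["docstring", "import", "import", "code"]))  # → 3
print(import_insert_pos(["docstring", "code"]))                      # → 1
```

Placing the new import at the end of the existing run keeps imports grouped, while the docstring fallback avoids inserting code above a module docstring.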
[fix_map.pyc binary contents omitted]

fixes/fix_basestring.pyc
[compiled bytecode; module docstring: "Fixer for basestring -> str."]

fixes/fix_next.pyo
[compiled bytecode; module docstring: "Fixer for it.next() -> next(it), per PEP 3114."]

fixes/fix_callable.py

# Copyright 2007 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer for callable().

This converts callable(obj) into isinstance(obj, collections.Callable), adding a
collections import if needed."""

# Local imports
from lib2to3 import fixer_base
from lib2to3.fixer_util import Call, Name, String, Attr, touch_import

class FixCallable(fixer_base.BaseFix):

    # Ignore callable(*args) or use of keywords.
    # Either could be a hint that the builtin callable() is not being used.
    PATTERN = """
    power< 'callable'
           trailer< lpar='('
                    ( not(arglist | argument<any '=' any>) func=any
                      | func=arglist<(not argument<any '=' any>) any ','> )
                    rpar=')' >
           after=any*
    >
    """

    def transform(self, node, results):
        func = results['func']

        touch_import(None, u'collections', node=node)

        args = [func.clone(), String(u', ')]
        args.extend(Attr(Name(u'collections'),
                         Name(u'Callable')))
        return Call(Name(u'isinstance'), args, prefix=node.prefix)

fixes/fix_nonzero.pyo
[compiled bytecode; module docstring: "Fixer for __nonzero__ -> __bool__ methods."]

fixes/fix_raise.pyo
[compiled bytecode; recovered module docstring:]
"""Fixer for 'raise E, V, T'

raise         -> raise
raise E       -> raise E
raise E, V    -> raise E(V)
raise E, V, T -> raise E(V).with_traceback(T)

raise (((E, E'), E''), E'''), V -> raise E(V)
raise "foo", V, T               -> warns about string exceptions

CAVEATS:
1) "raise E, V" will be incorrectly translated if V is an exception
   instance.  The correct Python 3 idiom is

        raise E from V

   but since we can't detect instance-hood by syntax alone and since
   any client code would have to be changed as well, we don't automate
   this.
"""

fixes/fix_idioms.pyc
[compiled bytecode; recovered module docstring:]
"""Adjust some old Python 2 idioms to their modern counterparts.

* Change some type comparisons to isinstance() calls:
    type(x) == T -> isinstance(x, T)
    type(x) is T -> isinstance(x, T)
    type(x) != T -> not isinstance(x, T)
    type(x) is not T -> not isinstance(x, T)

* Change "while 1:" into "while True:".

* Change both

    v = list(EXPR)
    v.sort()
    foo(v)

and the more general

    v = EXPR
    v.sort()
    foo(v)

into

    v = sorted(EXPR)
    foo(v)
"""
" cCs*|d}|itdd|idS(NRuTrueR(treplaceRR(RRRtone((s0/usr/lib64/python2.6/lib2to3/fixes/fix_idioms.pyRqs c Cs|d}|d}|id}|id}|o |itdd|inU|oA|i}d|_|ittd|gd|in td|i|i}d |jo|o:|id d |d if} d i | |d _q|i pt |i djpt t} |i i| |i | jpt |id d | _ndS( NtsorttnexttlisttexprusortedRusshould not have reached hereu i(tgetR RRRRRtremovet rpartitiontjointparenttAssertionErrort next_siblingR Rt append_child( RRRt sort_stmtt next_stmtt list_callt simple_exprtnewtbtwnt prefix_linestend_line((s0/usr/lib64/python2.6/lib2to3/fixes/fix_idioms.pyRus0           ( t__name__t __module__tTruetexplicittTYPEtCMPtPATTERNR RRRR(((s0/usr/lib64/python2.6/lib2to3/fixes/fix_idioms.pyR%s' N(t__doc__tRt fixer_utilRRRRRRR;R:tBaseFixR(((s0/usr/lib64/python2.6/lib2to3/fixes/fix_idioms.pyts .fixes/fix_urllib.pyc000066600000014503150501042300010533 0ustar00 Lc@sgdZddklZlZddklZddklZlZl Z l Z l Z hdddd d d d d dgfddddddddddddddddgfdd gfgd!6dd d"d#d$d%d&d'd(d)d*d+d,d-d.d/d0d1d2d3d4d5d6d7d8gfdd9d:gfgd;6Z e d;i e d!dd<Zd=efd>YZd?S(@sFix changes imports of urllib which are now incompatible. This is rather similar to fix_imports, but because of the more complex nature of the fixing for urllib, it has its own fixer. 
# ---- fixes/fix_input.py ----

"""Fixer that changes input(...) into eval(input(...))."""
# Author: Andre Roberge

# Local imports
from .. import fixer_base
from ..fixer_util import Call, Name
from .. import patcomp


context = patcomp.compile_pattern("power< 'eval' trailer< '(' any ')' > >")


class FixInput(fixer_base.BaseFix):

    PATTERN = """
              power< 'input' args=trailer< '(' [any] ')' > >
              """

    def transform(self, node, results):
        # If we're already wrapped in an eval() call, we're done.
        if context.match(node.parent.parent):
            return

        new = node.clone()
        new.prefix = u""
        return Call(Name(u"eval"), [new], prefix=node.prefix)
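FixInput wraps `input(...)` in `eval(...)` because Python 3's `input()` behaves like Python 2's `raw_input()`: it returns the typed text as a string instead of evaluating it. A minimal sketch of that equivalence; monkeypatching `builtins.input` to stand in for interactive typing is our own device, not part of the fixer:

```python
import builtins

# Pretend the user typed "2 + 3" at the prompt.
real_input = builtins.input
builtins.input = lambda prompt="": "2 + 3"
try:
    # Python 3's input() hands back the raw string...
    assert input() == "2 + 3"
    # ...so the fixer's rewrite, eval(input()), restores Python 2 semantics.
    assert eval(input()) == 5
finally:
    builtins.input = real_input
```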
# ---- fixes/fix_numliterals.py ----

"""Fixer that turns 1L into 1, 0755 into 0o755.
"""
# Copyright 2007 Georg Brandl.
# Licensed to PSF under a Contributor Agreement.

# Local imports
from ..pgen2 import token
from .. import fixer_base
from ..fixer_util import Number


class FixNumliterals(fixer_base.BaseFix):
    # This is so simple that we don't need the pattern compiler.

    _accept_type = token.NUMBER

    def match(self, node):
        # Override
        return (node.value.startswith(u"0") or node.value[-1] in u"Ll")

    def transform(self, node, results):
        val = node.value
        if val[-1] in u'Ll':
            val = val[:-1]
        elif val.startswith(u'0') and val.isdigit() and len(set(val)) > 1:
            val = u"0o" + val[1:]

        return Number(val, prefix=node.prefix)


# ---- fixes/fix_has_key.py ----

# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer for has_key().

Calls to .has_key() methods are expressed in terms of the 'in'
operator:

    d.has_key(k) -> k in d

CAVEATS:
1) While the primary target of this fixer is dict.has_key(), the
   fixer will change any has_key() method call, regardless of its
   class.

2) Cases like this will not be converted:

    m = d.has_key
    if m(k):
        ...

   Only *calls* to has_key() are converted.

While it is possible to convert the above to something like

    m = d.__contains__
    if m(k):
        ...

this is currently not done.
"""

# Local imports
from .. import pytree
from ..pgen2 import token
from .. import fixer_base
from ..fixer_util import Name, parenthesize


class FixHasKey(fixer_base.BaseFix):

    PATTERN = """
    anchor=power<
        before=any+
        trailer< '.' 'has_key' >
        trailer<
            '('
            ( not(arglist | argument<any '=' any>) arg=any
            | arglist<(not argument<any '=' any>) arg=any ','>
            )
            ')'
        >
        after=any*
    >
    |
    negation=not_test<
        'not'
        anchor=power<
            before=any+
            trailer< '.' 'has_key' >
            trailer<
                '('
                ( not(arglist | argument<any '=' any>) arg=any
                | arglist<(not argument<any '=' any>) arg=any ','>
                )
                ')'
            >
        >
    >
    """

    def transform(self, node, results):
        assert results
        syms = self.syms
        if (node.parent.type == syms.not_test and
            self.pattern.match(node.parent)):
            # Don't transform a node matching the first alternative of the
            # pattern when its parent matches the second alternative
            return None
        negation = results.get("negation")
        anchor = results["anchor"]
        prefix = node.prefix
        before = [n.clone() for n in results["before"]]
        arg = results["arg"].clone()
        after = results.get("after")
        if after:
            after = [n.clone() for n in after]
        if arg.type in (syms.comparison, syms.not_test, syms.and_test,
                        syms.or_test, syms.test, syms.lambdef,
                        syms.argument):
            arg = parenthesize(arg)
        if len(before) == 1:
            before = before[0]
        else:
            before = pytree.Node(syms.power, before)
        before.prefix = u" "
        n_op = Name(u"in", prefix=u" ")
        if negation:
            n_not = Name(u"not", prefix=u" ")
            n_op = pytree.Node(syms.comp_op, (n_not, n_op))
        new = pytree.Node(syms.comparison, (arg, n_op, before))
        if after:
            new = parenthesize(new)
            new = pytree.Node(syms.power, (new,) + tuple(after))
        if node.parent.type in (syms.comparison, syms.expr, syms.xor_expr,
                                syms.and_expr, syms.shift_expr,
                                syms.arith_expr, syms.term,
                                syms.factor, syms.power):
            new = parenthesize(new)
        new.prefix = prefix
        return new
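The `transform` in FixNumliterals is plain string surgery on the literal's text. A hypothetical standalone version of the same rule, lifted out of the parse tree for illustration (the function name is ours):

```python
def fix_num_literal(val):
    """Apply the FixNumliterals rewrite to a numeric literal's text."""
    if val[-1] in "Ll":
        # Drop the long suffix: 1L -> 1
        val = val[:-1]
    elif val.startswith("0") and val.isdigit() and len(set(val)) > 1:
        # Old-style octal gains the 0o prefix: 0755 -> 0o755
        val = "0o" + val[1:]
    return val

assert fix_num_literal("1L") == "1"
assert fix_num_literal("0755") == "0o755"
assert fix_num_literal("0") == "0"   # a lone zero is left alone (len(set) == 1)
```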
# ---- fixes/fix_types.py ----

# Copyright 2007 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer for removing uses of the types module.

These work for only the known names in the types module.  The forms above
can include types. or not.  i.e., it is assumed the module is imported
either as:

    import types
    from types import ... # either * or specific types

The import statements are not modified.

There should be another fixer that handles at least the following constants:

   type([]) -> list
   type(()) -> tuple
   type('') -> str

"""

# Local imports
from ..pgen2 import token
from .. import fixer_base
from ..fixer_util import Name

_TYPE_MAPPING = {
        'BooleanType' : 'bool',
        'BufferType' : 'memoryview',
        'ClassType' : 'type',
        'ComplexType' : 'complex',
        'DictType': 'dict',
        'DictionaryType' : 'dict',
        'EllipsisType' : 'type(Ellipsis)',
        #'FileType' : 'io.IOBase',
        'FloatType': 'float',
        'IntType': 'int',
        'ListType': 'list',
        'LongType': 'int',
        'ObjectType' : 'object',
        'NoneType': 'type(None)',
        'NotImplementedType' : 'type(NotImplemented)',
        'SliceType' : 'slice',
        'StringType': 'bytes', # XXX ?
        'StringTypes' : 'str', # XXX ?
        'TupleType': 'tuple',
        'TypeType' : 'type',
        'UnicodeType': 'str',
        'XRangeType' : 'range',
    }

_pats = ["power< 'types' trailer< '.' name='%s' > >" % t for t in _TYPE_MAPPING]

class FixTypes(fixer_base.BaseFix):

    PATTERN = '|'.join(_pats)

    def transform(self, node, results):
        new_value = unicode(_TYPE_MAPPING.get(results["name"].value))
        if new_value:
            return Name(new_value, prefix=node.prefix)
        return None


# ---- fixes/fix_exitfunc.py ----

"""
Convert use of sys.exitfunc to use the atexit module.
"""

# Author: Benjamin Peterson

from lib2to3 import pytree, fixer_base
from lib2to3.fixer_util import Name, Attr, Call, Comma, Newline, syms


class FixExitfunc(fixer_base.BaseFix):

    PATTERN = """
              (
                  sys_import=import_name<'import'
                      ('sys'
                      |
                      dotted_as_names< (any ',')* 'sys' (',' any)* >
                      )
                  >
              |
                  expr_stmt<
                      power< 'sys' trailer< '.' 'exitfunc' > >
                  '=' func=any >
              )
              """

    def __init__(self, *args):
        super(FixExitfunc, self).__init__(*args)

    def start_tree(self, tree, filename):
        super(FixExitfunc, self).start_tree(tree, filename)
        self.sys_import = None

    def transform(self, node, results):
        # First, find the sys import. We'll just hope it's global scope.
        if "sys_import" in results:
            if self.sys_import is None:
                self.sys_import = results["sys_import"]
            return

        func = results["func"].clone()
        func.prefix = u""
        register = pytree.Node(syms.power,
                               Attr(Name(u"atexit"), Name(u"register"))
                               )
        call = Call(register, [func], node.prefix)
        node.replace(call)

        if self.sys_import is None:
            # That's interesting.
            self.warning(node, "Can't find sys import; Please add an atexit "
                               "import at the top of your file.")
            return

        # Now add an atexit import after the sys import.
        names = self.sys_import.children[1]
        if names.type == syms.dotted_as_names:
            names.append_child(Comma())
            names.append_child(Name(u"atexit", u" "))
        else:
            containing_stmt = self.sys_import.parent
            position = containing_stmt.children.index(self.sys_import)
            stmt_container = containing_stmt.parent
            new_import = pytree.Node(syms.import_name,
                              [Name(u"import"), Name(u"atexit", u" ")]
                              )
            new = pytree.Node(syms.simple_stmt, [new_import])
            containing_stmt.insert_child(position + 1, Newline())
            containing_stmt.insert_child(position + 2, new)
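FixExitfunc rewrites `sys.exitfunc = f` into `atexit.register(f)`. A quick sketch of the replacement API; the handler name is ours:

```python
import atexit

def goodbye():
    print("bye")

# atexit.register returns the function it was given, which also makes
# it usable as a decorator.
assert atexit.register(goodbye) is goodbye

# Unregister again so this demo leaves no interpreter-exit side effect.
atexit.unregister(goodbye)
```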
# ---- fixes/fix_nonzero.py ----

"""Fixer for __nonzero__ -> __bool__ methods."""
# Author: Collin Winter

# Local imports
from .. import fixer_base
from ..fixer_util import Name, syms

class FixNonzero(fixer_base.BaseFix):
    PATTERN = """
    classdef< 'class' any+ ':'
              suite< any*
                     funcdef< 'def' name='__nonzero__'
                              parameters< '(' NAME ')' > any+ >
                     any* > >
    """

    def transform(self, node, results):
        name = results["name"]
        new = Name(u"__bool__", prefix=name.prefix)
        name.replace(new)
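FixNonzero only renames the method: Python 3's truth-testing hook is `__bool__`, not Python 2's `__nonzero__`. A minimal illustration (the class is ours, for demonstration only):

```python
class Flag(object):
    def __init__(self, on):
        self.on = on

    def __bool__(self):          # was __nonzero__ in Python 2
        return self.on

# bool() and truth contexts now route through __bool__.
assert bool(Flag(True)) is True
assert not Flag(False)
```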
# ---- fixes/fix_raise.py ----

"""Fixer for 'raise E, V, T'

raise         -> raise
raise E       -> raise E
raise E, V    -> raise E(V)
raise E, V, T -> raise E(V).with_traceback(T)

raise (((E, E'), E''), E'''), V -> raise E(V)
raise "foo", V, T               -> warns about string exceptions


CAVEATS:
1) "raise E, V" will be incorrectly translated if V is an exception
   instance. The correct Python 3 idiom is

        raise E from V

   but since we can't detect instance-hood by syntax alone and since
   any client code would have to be changed as well, we don't automate
   this.
"""
# Author: Collin Winter

# Local imports
from .. import pytree
from ..pgen2 import token
from .. import fixer_base
from ..fixer_util import Name, Call, Attr, ArgList, is_tuple

class FixRaise(fixer_base.BaseFix):

    PATTERN = """
    raise_stmt< 'raise' exc=any [',' val=any [',' tb=any]] >
    """

    def transform(self, node, results):
        syms = self.syms

        exc = results["exc"].clone()
        if exc.type is token.STRING:
            self.cannot_convert(node,
                                "Python 3 does not support string exceptions")
            return

        # Python 2 supports
        #  raise ((((E1, E2), E3), E4), E5), V
        # as a synonym for
        #  raise E1, V
        # Since Python 3 will not support this, we recurse down any tuple
        # literals, always taking the first element.
        if is_tuple(exc):
            while is_tuple(exc):
                # exc.children[1:-1] is the unparenthesized tuple
                # exc.children[1].children[0] is the first element of the tuple
                exc = exc.children[1].children[0].clone()
            exc.prefix = " "

        if "val" not in results:
            # One-argument raise
            new = pytree.Node(syms.raise_stmt, [Name(u"raise"), exc])
            new.prefix = node.prefix
            return new

        val = results["val"].clone()
        if is_tuple(val):
            args = [c.clone() for c in val.children[1:-1]]
        else:
            val.prefix = u""
            args = [val]

        if "tb" in results:
            tb = results["tb"].clone()
            tb.prefix = u""

            e = Call(exc, args)
            with_tb = Attr(e, Name(u'with_traceback')) + [ArgList([tb])]
            new = pytree.Node(syms.simple_stmt, [Name(u"raise")] + with_tb)
            new.prefix = node.prefix
            return new
        else:
            return pytree.Node(syms.raise_stmt,
                               [Name(u"raise"), Call(exc, args)],
                               prefix=node.prefix)


# ---- fixes/fix_imports2.py ----

"""Fix incompatible imports and module references that must be fixed after
fix_imports."""
from . import fix_imports


MAPPING = {
            'whichdb': 'dbm',
            'anydbm': 'dbm',
          }


class FixImports2(fix_imports.FixImports):

    run_order = 7

    mapping = MAPPING
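The `raise E(V).with_traceback(T)` form that FixRaise emits is ordinary Python 3. A runnable sketch of the idiom, re-raising with an explicitly captured traceback:

```python
import sys

try:
    try:
        1 / 0
    except ZeroDivisionError:
        tb = sys.exc_info()[2]
        # Python 3 spelling of Python 2's: raise ValueError, "wrapped", tb
        raise ValueError("wrapped").with_traceback(tb)
except ValueError as exc:
    caught = exc

assert str(caught) == "wrapped"
assert caught.__traceback__ is not None
```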
[fixes/fix_unicode.pyc, fixes/fix_long.pyc — compiled bytecode, binary content omitted]

--- fixes/fix_renames.py ---

"""Fix incompatible renames

Fixes:
  * sys.maxint -> sys.maxsize
"""
# Author: Christian Heimes
# based on Collin Winter's fix_import

# Local imports
from .. import fixer_base
from ..fixer_util import Name, attr_chain

MAPPING = {"sys":  {"maxint" : "maxsize"},
          }
LOOKUP = {}

def alternates(members):
    return "(" + "|".join(map(repr, members)) + ")"


def build_pattern():
    #bare = set()
    for module, replace in MAPPING.items():
        for old_attr, new_attr in replace.items():
            LOOKUP[(module, old_attr)] = new_attr
            #bare.add(module)
            #bare.add(old_attr)
            #yield """
            #       import_name< 'import' (module=%r
            #           | dotted_as_names< any* module=%r any* >) >
            #       """ % (module, module)
            yield """
                  import_from< 'from' module_name=%r 'import'
                      ( attr_name=%r | import_as_name< attr_name=%r 'as' any >) >
                  """ % (module, old_attr, old_attr)
            yield """
                  power< module_name=%r trailer< '.' attr_name=%r > any* >
                  """ % (module, old_attr)
    #yield """bare_name=%s""" % alternates(bare)


class FixRenames(fixer_base.BaseFix):
    PATTERN = "|".join(build_pattern())

    order = "pre" # Pre-order tree traversal

    # Don't match the node if it's within another match
    def match(self, node):
        match = super(FixRenames, self).match
        results = match(node)
        if results:
            if any(match(obj) for obj in attr_chain(node, "parent")):
                return False
            return results
        return False

    #def start_tree(self, tree, filename):
    #    super(FixRenames, self).start_tree(tree, filename)
    #    self.replace = {}

    def transform(self, node, results):
        mod_name = results.get("module_name")
        attr_name = results.get("attr_name")
        #bare_name = results.get("bare_name")
        #import_mod = results.get("module")
        if mod_name and attr_name:
            new_attr = unicode(LOOKUP[(mod_name.value, attr_name.value)])
            attr_name.replace(Name(new_attr, prefix=attr_name.prefix))

--- fixes/fix_long.py ---

# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer that turns 'long' into 'int' everywhere.
"""

# Local imports
from lib2to3 import fixer_base
from lib2to3.fixer_util import is_probably_builtin


class FixLong(fixer_base.BaseFix):

    PATTERN = "'long'"

    def transform(self, node, results):
        if is_probably_builtin(node):
            node.value = u"int"
            node.changed()

[fixes/fix_print.pyo, fixes/fix_tuple_params.pyo — compiled bytecode, binary content omitted]
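The core of fix_renames.py is the flattening of the nested MAPPING table into the `(module, old_attr) -> new_attr` LOOKUP dictionary that `transform` consults. The sketch below reproduces that flattening and applies it with a plain regex instead of lib2to3's parse-tree patterns; the function name `rename_attrs` and the regex approximation are mine and ignore the context checks the real fixer performs.

```python
import re

# Rename table in the same shape fix_renames.py uses:
# module -> {old attribute name: new attribute name}.
MAPPING = {"sys": {"maxint": "maxsize"}}

# Flatten into the (module, old_attr) -> new_attr lookup that
# FixRenames.transform consults via LOOKUP[(mod_name.value, attr_name.value)].
LOOKUP = {(mod, old): new
          for mod, attrs in MAPPING.items()
          for old, new in attrs.items()}

def rename_attrs(source):
    """Rewrite dotted references such as sys.maxint -> sys.maxsize.

    A regex approximation of the fixer's parse-tree match: it rewrites any
    NAME.NAME pair found in LOOKUP and leaves everything else untouched.
    """
    def repl(match):
        mod, attr = match.group(1), match.group(2)
        return "%s.%s" % (mod, LOOKUP.get((mod, attr), attr))
    return re.sub(r"\b(\w+)\.(\w+)\b", repl, source)

print(rename_attrs("n = sys.maxint"))  # n = sys.maxsize
```

Unlike this sketch, the real fixer also refuses to fire when the match is nested inside another match (the `attr_chain(node, "parent")` check), which prevents double rewrites in chained attribute accesses.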
[fixes/fix_tuple_params.pyo, fixes/fix_funcattrs.pyo, fixes/fix_set_literal.pyc, fixes/fix_imports.pyo, fixes/fix_methodattrs.pyo — compiled bytecode, binary content omitted]
[fixes/fix_methodattrs.pyo — compiled bytecode, binary content omitted]

--- fixes/fix_future.py ---

"""Remove __future__ imports

from __future__ import foo is replaced with an empty line.
"""
# Author: Christian Heimes

# Local imports
from .. import fixer_base
from ..fixer_util import BlankLine

class FixFuture(fixer_base.BaseFix):
    PATTERN = """import_from< 'from' module_name="__future__" 'import' any >"""

    # This should be run last -- some things check for the import
    run_order = 10

    def transform(self, node, results):
        new = BlankLine()
        new.prefix = node.prefix
        return new

[fixes/fix_methodattrs.pyc, fixes/fix_execfile.pyc — compiled bytecode, binary content omitted]
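fix_future.py replaces each `from __future__ import ...` statement with a blank line rather than deleting it, so line numbers elsewhere in the file stay stable. A minimal line-based sketch of the same idea, assuming whole-line statements only (the function name and regex are mine; the real fixer works on the parse tree and preserves the node's prefix):

```python
import re

# Matches a "from __future__ import ..." statement at the start of a line.
FUTURE_IMPORT = re.compile(r"^\s*from\s+__future__\s+import\s+")

def strip_future_imports(source):
    """Replace each __future__ import line with an empty line."""
    lines = source.splitlines(True)
    return "".join("\n" if FUTURE_IMPORT.match(line) else line
                   for line in lines)

print(strip_future_imports("from __future__ import division\nx = 1\n"))
# -> "\nx = 1\n"
```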
[fixes/fix_execfile.pyc, fixes/fix_urllib.pyo, fixes/fix_exitfunc.pyc, fixes/fix_metaclass.pyc — compiled bytecode, binary content omitted]
[fixes/fix_metaclass.pyc — compiled bytecode, binary content omitted]

--- fixes/fix_buffer.py ---

# Copyright 2007 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer that changes buffer(...) into memoryview(...)."""

# Local imports
from .. import fixer_base
from ..fixer_util import Name


class FixBuffer(fixer_base.BaseFix):

    explicit = True  # The user must ask for this fixer

    PATTERN = """
              power< name='buffer' trailer< '(' [any] ')' > any* >
              """

    def transform(self, node, results):
        name = results["name"]
        name.replace(Name(u"memoryview", prefix=name.prefix))

[fixes/fix_raise.pyc — compiled bytecode, binary content omitted]
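fix_buffer.py only renames the called name: `buffer(...)` becomes `memoryview(...)`, leaving the arguments untouched. A textual sketch of that renaming, assuming call sites only (the function name and regex are mine; the real fixer's parse-tree pattern guarantees it only touches actual call positions, which a regex cannot fully do):

```python
import re

def fix_buffer(source):
    """Rename buffer(...) call sites to memoryview(...)."""
    # \b keeps names like "mybuffer(" from matching.
    return re.sub(r"\bbuffer\s*\(", "memoryview(", source)

print(fix_buffer("b = buffer(data)"))  # b = memoryview(data)
```

Note that the fixer is marked `explicit = True`, so 2to3 applies it only when the user requests it by name.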
[fixes/fix_raise.pyc, fixes/fix_callable.pyo — compiled bytecode, binary content omitted]

--- fixes/fix_standarderror.py ---

# Copyright 2007 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer for StandardError -> Exception."""

# Local imports
from .. import fixer_base
from ..fixer_util import Name


class FixStandarderror(fixer_base.BaseFix):

    PATTERN = """
              'StandardError'
              """

    def transform(self, node, results):
        return Name(u"Exception", prefix=node.prefix)

[fixes/fix_paren.pyo — compiled bytecode, binary content omitted]

--- fixes/fix_exec.py ---

# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer for exec.

This converts usages of the exec statement into calls to a built-in
exec() function.

exec code in ns1, ns2 -> exec(code, ns1, ns2)
"""

# Local imports
from .. import pytree
from .. import fixer_base
from ..fixer_util import Comma, Name, Call


class FixExec(fixer_base.BaseFix):

    PATTERN = """
    exec_stmt< 'exec' a=any 'in' b=any [',' c=any] >
    |
    exec_stmt< 'exec' (not atom<'(' [any] ')'>) a=any >
    """

    def transform(self, node, results):
        assert results
        syms = self.syms
        a = results["a"]
        b = results.get("b")
        c = results.get("c")
        args = [a.clone()]
        args[0].prefix = ""
        if b is not None:
            args.extend([Comma(), b.clone()])
        if c is not None:
            args.extend([Comma(), c.clone()])

        return Call(Name(u"exec"), args, prefix=node.prefix)

--- fixes/fix_execfile.py ---

# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer for execfile.

This converts usages of the execfile function into calls to the built-in
exec() function.
"""

from .. import fixer_base
from ..fixer_util import (Comma, Name, Call, LParen, RParen, Dot, Node,
                          ArgList, String, syms)


class FixExecfile(fixer_base.BaseFix):

    PATTERN = """
    power< 'execfile' trailer< '(' arglist< filename=any [',' globals=any [',' locals=any ] ] > ')' > >
    |
    power< 'execfile' trailer< '(' filename=any ')' > >
    """

    def transform(self, node, results):
        assert results
        filename = results["filename"]
        globals = results.get("globals")
        locals = results.get("locals")

        # Copy over the prefix from the right parentheses end of the execfile
        # call.
        execfile_paren = node.children[-1].children[-1].clone()
        # Construct open().read().
        open_args = ArgList([filename.clone()], rparen=execfile_paren)
        open_call = Node(syms.power, [Name(u"open"), open_args])
        read = [Node(syms.trailer, [Dot(), Name(u'read')]),
                Node(syms.trailer, [LParen(), RParen()])]
        open_expr = [open_call] + read
        # Wrap the open call in a compile call. This is so the filename will be
        # preserved in the execed code.
        filename_arg = filename.clone()
        filename_arg.prefix = u" "
        exec_str = String(u"'exec'", u" ")
        compile_args = open_expr + [Comma(), filename_arg, Comma(), exec_str]
        compile_call = Call(Name(u"compile"), compile_args, u"")
        # Finally, replace the execfile call with an exec call.
        args = [compile_call]
        if globals is not None:
            args.extend([Comma(), globals.clone()])
        if locals is not None:
            args.extend([Comma(), locals.clone()])
        return Call(Name(u"exec"), args, prefix=node.prefix)

[fixes/fix_isinstance.pyc — compiled bytecode, binary content omitted]

--- fixes/fix_ne.py ---

# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer that turns <> into !=."""

# Local imports
from .. import pytree
from ..pgen2 import token
from .. import fixer_base


class FixNe(fixer_base.BaseFix):
    # This is so simple that we don't need the pattern compiler.

    _accept_type = token.NOTEQUAL

    def match(self, node):
        # Override
        return node.value == u"<>"

    def transform(self, node, results):
        new = pytree.Leaf(token.NOTEQUAL, u"!=", prefix=node.prefix)
        return new

[fixes/fix_import.pyc — compiled bytecode, binary content omitted]
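fix_ne.py is the simplest kind of fixer: it skips the pattern compiler entirely, accepts every NOTEQUAL token, and rewrites the leaf when its text is the Python 2 spelling `<>`. A naive textual sketch of the same rewrite (the function name and regex are mine; because the real fixer operates on tokens, it can never touch a `<>` inside a string literal the way this regex could):

```python
import re

def ne_to_neq(source):
    """Rewrite the Python 2 inequality operator <> as !=."""
    return re.sub(r"<>", "!=", source)

print(ne_to_neq("if a <> b: pass"))  # if a != b: pass
```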
# ==== fixes/fix_import.pyc, fixes/fix_intern.pyo, fixes/fix_zip.pyc ====
# Compiled bytecode dropped (fix_import and fix_intern source appears
# elsewhere in this dump; the fix_zip docstring is recovered with
# fixes/fix_zip.pyo below).

# ==== fixes/fix_intern.py ====
# Copyright 2006 Georg Brandl.
# Licensed to PSF under a Contributor Agreement.

"""Fixer for intern().

intern(s) -> sys.intern(s)"""

# Local imports
from .. import pytree
from .. import fixer_base
from ..fixer_util import Name, Attr, touch_import


class FixIntern(fixer_base.BaseFix):

    PATTERN = """
    power< 'intern'
           trailer< lpar='('
                    ( not(arglist | argument<any '=' any>) obj=any
                      | obj=arglist<(not argument<any '=' any>) any ','> )
                    rpar=')' >
           after=any*
    >
    """

    def transform(self, node, results):
        syms = self.syms
        obj = results["obj"].clone()
        if obj.type == syms.arglist:
            newarglist = obj.clone()
        else:
            newarglist = pytree.Node(syms.arglist, [obj.clone()])
        after = results["after"]
        if after:
            after = [n.clone() for n in after]
        new = pytree.Node(syms.power,
                          Attr(Name(u"sys"), Name(u"intern")) +
                          [pytree.Node(syms.trailer,
                                       [results["lpar"].clone(),
                                        newarglist,
                                        results["rpar"].clone()])] + after)
        new.prefix = node.prefix
        touch_import(None, u'sys', node)
        return new
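fix_intern rewrites `intern(s)` to `sys.intern(s)`, because the builtin moved to the `sys` module in Python 3. A quick check that the rewrite's target behaves as expected — after `sys.intern`, equal strings share a single object:

```python
import sys

# Two equal strings built at runtime, so they are not pre-interned constants.
a = "".join(["spam", "_", "eggs"])
b = "".join(["spam", "_", "eggs"])
assert a == b

# sys.intern returns the canonical object for equal strings.
ia = sys.intern(a)
ib = sys.intern(b)
assert ia is ib
```

This identity guarantee is what makes interned strings cheap to compare and deduplicate, exactly as the Python 2 builtin did.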
- "except E, T:" where T is a tuple or list literal: except E as t: T = t.args i(tpytree(ttoken(t fixer_base(tAssigntAttrtNametis_tupletis_listtsymsccsfx_t|D]Q\}}|itijo2|ididjo|||dfVq^q q WdS(Niuexcepti(t enumeratettypeRt except_clausetchildrentvalue(tnodestitn((s0/usr/lib64/python2.6/lib2to3/fixes/fix_except.pyt find_exceptss  t FixExceptcBseZdZdZRS(s1 try_stmt< 'try' ':' (simple_stmt | suite) cleanup=(except_clause ':' (simple_stmt | suite))+ tail=(['except' ':' (simple_stmt | suite)] ['else' ':' (simple_stmt | suite)] ['finally' ':' (simple_stmt | suite)]) > cCsO|i}g}|dD]}||iq~}g}|dD]}||iqC~} xt| D]\} } t| idjot| idd!\} } }| itddd|iti jot|i dd}|i}d|_ |i||i}| i}x2t |D]$\}}t |tioPq/q/Wt|p t|o"t|t|td }nt||}x(t|| D]}| id |qW| i||q|i djo d|_ qqlqlWg}|id D]}||iq~| |}ti|i|S( Nttailtcleanupiiuastprefixu uuargsii(RtcloneRtlenR treplaceRR RtNAMEtnew_nameRR t isinstanceRtNodeRRRRtreversedt insert_child(tselftnodetresultsRt_[1]RRt_[2]tcht try_cleanupR te_suitetEtcommatNtnew_Nttargett suite_stmtsRtstmttassigntchildt_[3]tcR ((s0/usr/lib64/python2.6/lib2to3/fixes/fix_except.pyt transform.s< ++        "6(t__name__t __module__tPATTERNR2(((s0/usr/lib64/python2.6/lib2to3/fixes/fix_except.pyR$sN(t__doc__tRtpgen2RRt fixer_utilRRRRRRRtBaseFixR(((s0/usr/lib64/python2.6/lib2to3/fixes/fix_except.pyts . 
fixes/fix_itertools_imports.pyo000066600000003427150501042300013062 0ustar00 Lc@sOdZddklZddklZlZlZdeifdYZdS(sA Fixer for imports of itertools.(imap|ifilter|izip|ifilterfalse) i(t fixer_base(t BlankLinetsymsttokentFixItertoolsImportscBseZdeZdZRS(sT import_from< 'from' 'itertools' 'import' imports=any > c Cs|d}|itijp |i o |g}n |i}x|dddD]}|itijo|i}|}n|id}|i}|d jod|_|iqR|djo|i d|_qRqRW|ip|g}t } x@|D]8}| o!|iti jo|iq| t N} qW|d iti jo|d in|ipt |d d p|i djo |i} t}| |_|SdS( Ntimportsiiuimapuizipuifilteru ifilterfalseu filterfalseitvalue(uimapuizipuifilter(ttypeRtimport_as_nametchildrenRtNAMERtNonetremovetchangedtTruetCOMMAtgetattrtparenttprefixR( tselftnodetresultsRR tchildtmembert name_nodet member_namet remove_commatp((s;/usr/lib64/python2.6/lib2to3/fixes/fix_itertools_imports.pyt transform s@              (t__name__t __module__tlocalstPATTERNR(((s;/usr/lib64/python2.6/lib2to3/fixes/fix_itertools_imports.pyRs N( t__doc__tlib2to3Rtlib2to3.fixer_utilRRRtBaseFixR(((s;/usr/lib64/python2.6/lib2to3/fixes/fix_itertools_imports.pytsfixes/fix_throw.pyc000066600000003720150501042300010404 0ustar00 Lc@s{dZddklZddklZddklZddklZlZl Z l Z l Z dei fdYZ dS( sFixer for generator.throw(E, V, T). g.throw(E) -> g.throw(E) g.throw(E, V) -> g.throw(E(V)) g.throw(E, V, T) -> g.throw(E(V).with_traceback(T)) g.throw("foo"[, V[, T]]) will warn about string exceptions.i(tpytree(ttoken(t fixer_base(tNametCalltArgListtAttrtis_tupletFixThrowcBseZdZdZRS(s power< any trailer< '.' 'throw' > trailer< '(' args=arglist< exc=any ',' val=any [',' tb=any] > ')' > > | power< any trailer< '.' 
# ==== fixes/fix_funcattrs.pyc -- compiled bytecode dropped ====
# fix_funcattrs docstring (recovered): Fix function attribute names
# (f.func_x -> f.__x__).

# ==== fixes/fix_dict.py ====
# Copyright 2007 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer for dict methods.

d.keys() -> list(d.keys())
d.items() -> list(d.items())
d.values() -> list(d.values())

d.iterkeys() -> iter(d.keys())
d.iteritems() -> iter(d.items())
d.itervalues() -> iter(d.values())

d.viewkeys() -> d.keys()
d.viewitems() -> d.items()
d.viewvalues() -> d.values()

Except in certain very specific contexts: the iter() can be dropped
when the context is list(), sorted(), iter() or for...in; the list()
can be dropped when the context is list() or sorted() (but not iter()
or for...in!).  Special contexts that apply to both: list(), sorted(),
tuple(), set(), any(), all(), sum().

Note: iter(d.keys()) could be written as iter(d) but since the
original d.iterkeys() was also redundant we don't fix this.  And there
are (rare) contexts where it makes a difference (e.g. when passing it
as an argument to a function that introspects the argument).
"""

# Local imports
from .. import pytree
from .. import patcomp
from ..pgen2 import token
from .. import fixer_base
from ..fixer_util import Name, Call, LParen, RParen, ArgList, Dot
from .. import fixer_util


iter_exempt = fixer_util.consuming_calls | set(["iter"])


class FixDict(fixer_base.BaseFix):
    PATTERN = """
    power< head=any+
         trailer< '.' method=('keys'|'items'|'values'|
                              'iterkeys'|'iteritems'|'itervalues'|
                              'viewkeys'|'viewitems'|'viewvalues') >
         parens=trailer< '(' ')' >
         tail=any*
    >
    """

    def transform(self, node, results):
        head = results["head"]
        method = results["method"][0]  # Extract node for method name.
        tail = results["tail"]
        syms = self.syms
        method_name = method.value
        isiter = method_name.startswith(u"iter")
        isview = method_name.startswith(u"view")
        if isiter or isview:
            method_name = method_name[4:]
        assert method_name in (u"keys", u"items", u"values"), repr(method)
        head = [n.clone() for n in head]
        tail = [n.clone() for n in tail]
        special = not tail and self.in_special_context(node, isiter)
        args = head + [pytree.Node(syms.trailer,
                                   [Dot(),
                                    Name(method_name,
                                         prefix=method.prefix)]),
                       results["parens"].clone()]
        new = pytree.Node(syms.power, args)
        if not (special or isview):
            new.prefix = u""
            new = Call(Name(u"iter" if isiter else u"list"), [new])
        if tail:
            new = pytree.Node(syms.power, [new] + tail)
        new.prefix = node.prefix
        return new

    P1 = "power< func=NAME trailer< '(' node=any ')' > any* >"
    p1 = patcomp.compile_pattern(P1)

    P2 = """for_stmt< 'for' any 'in' node=any ':' any* >
            | comp_for< 'for' any 'in' node=any any* >
         """
    p2 = patcomp.compile_pattern(P2)

    def in_special_context(self, node, isiter):
        if node.parent is None:
            return False
        results = {}
        if (node.parent.parent is not None and
            self.p1.match(node.parent.parent, results) and
            results["node"] is node):
            if isiter:
                # iter(d.iterkeys()) -> iter(d.keys()), etc.
                return results["func"].value in iter_exempt
            else:
                # list(d.keys()) -> list(d.keys()), etc.
                return results["func"].value in fixer_util.consuming_calls
        if not isiter:
            return False
        # for ... in d.iterkeys() -> for ... in d.keys(), etc.
        return self.p2.match(node.parent, results) and results["node"] is node

# ==== fixes/fix_paren.py ====
"""Fixer that adds parentheses where they are required.

This converts ``[x for x in 1, 2]`` to ``[x for x in (1, 2)]``."""

# By Taek Joo Kim and Benjamin Peterson

# Local imports
from .. import fixer_base
from ..fixer_util import LParen, RParen

# XXX This doesn't support nested for loops like [x for x in 1, 2 for x in 1, 2]
class FixParen(fixer_base.BaseFix):
    PATTERN = """
        atom< ('[' | '(')
            (listmaker< any
                comp_for<
                    'for' NAME 'in'
                    target=testlist_safe< any (',' any)+ [','] >
                    [any]
                >
            >
            |
            testlist_gexp< any
                comp_for<
                    'for' NAME 'in'
                    target=testlist_safe< any (',' any)+ [','] >
                    [any]
                >
            >)
        (']' | ')') >
    """

    def transform(self, node, results):
        target = results["target"]

        lparen = LParen()
        lparen.prefix = target.prefix
        target.prefix = u""  # Make it hug the parentheses
        target.insert_child(0, lparen)
        target.append_child(RParen())

# ==== fixes/fix_standarderror.pyc, fixes/fix_standarderror.pyo ====
# Compiled bytecode dropped.  Docstring (recovered): Fixer for
# StandardError -> Exception.
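The context rules in fix_dict exist because Python 3's `d.keys()`, `d.items()` and `d.values()` return views rather than lists; wrapping the call in `list()` or `iter()` restores the Python 2 semantics, except where a consuming context such as `sorted()` makes the wrapper redundant. A small demonstration of why the wrapper matters:

```python
d = {"a": 1, "b": 2}

keys = d.keys()            # a view in Python 3, not a list
assert not isinstance(keys, list)

snapshot = list(d.keys())  # what fix_dict produces for a bare d.keys()
d["c"] = 3

assert "c" in d.keys()       # the live view tracks later mutations
assert "c" not in snapshot   # the list() copy does not
assert sorted(d.keys()) == ["a", "b", "c"]  # sorted() consumes the view directly
```

The `in_special_context` check above is what lets the fixer leave `sorted(d.keys())` and `for k in d.keys():` alone.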
# ==== fixes/fix_operator.py ====
"""Fixer for operator.{isCallable,sequenceIncludes}

operator.isCallable(obj)       -> hasattr(obj, '__call__')
operator.sequenceIncludes(obj) -> operator.contains(obj)
"""

# Local imports
from .. import fixer_base
from ..fixer_util import Call, Name, String


class FixOperator(fixer_base.BaseFix):

    methods = "method=('isCallable'|'sequenceIncludes')"
    func = "'(' func=any ')'"
    PATTERN = """
              power< module='operator'
                trailer< '.' %(methods)s > trailer< %(func)s > >
              |
              power< %(methods)s trailer< %(func)s > >
              """ % dict(methods=methods, func=func)

    def transform(self, node, results):
        method = results["method"][0]

        if method.value == u"sequenceIncludes":
            if "module" not in results:
                # operator may not be in scope, so we can't make a change.
                self.warning(node, "You should use operator.contains here.")
            else:
                method.value = u"contains"
                method.changed()
        elif method.value == u"isCallable":
            if "module" not in results:
                self.warning(node,
                             "You should use hasattr(%s, '__call__') here." %
                             results["func"].value)
            else:
                func = results["func"]
                args = [func.clone(), String(u", "), String(u"'__call__'")]
                return Call(Name(u"hasattr"), args, prefix=node.prefix)

# ==== fixes/fix_raw_input.py ====
"""Fixer that changes raw_input(...) into input(...)."""
# Author: Andre Roberge

# Local imports
from .. import fixer_base
from ..fixer_util import Name

class FixRawInput(fixer_base.BaseFix):

    PATTERN = """
              power< name='raw_input' trailer< '(' [any] ')' > any* >
              """

    def transform(self, node, results):
        name = results["name"]
        name.replace(Name(u"input", prefix=name.prefix))

# ==== fixes/fix_reduce.py ====
# Copyright 2008 Armin Ronacher.
# Licensed to PSF under a Contributor Agreement.

"""Fixer for reduce().

Makes sure reduce() is imported from the functools module if reduce is
used in that module.
"""

from lib2to3 import fixer_base
from lib2to3.fixer_util import touch_import


class FixReduce(fixer_base.BaseFix):

    PATTERN = """
    power< 'reduce'
        trailer< '('
            arglist< (
                (not(argument<any '=' any>) any ','
                 not(argument<any '=' any>) any) |
                (not(argument<any '=' any>) any ','
                 not(argument<any '=' any>) any ','
                 not(argument<any '=' any>) any)
            ) >
        ')' >
    >
    """

    def transform(self, node, results):
        touch_import(u'functools', u'reduce', node)

# ==== fixes/fix_unicode.py ====
"""Fixer that changes unicode to str, unichr to chr, and u"..." into "...".

"""

import re
from ..pgen2 import token
from .. import fixer_base

_mapping = {u"unichr" : u"chr", u"unicode" : u"str"}
_literal_re = re.compile(ur"[uU][rR]?[\'\"]")

class FixUnicode(fixer_base.BaseFix):

    PATTERN = "STRING | 'unicode' | 'unichr'"

    def transform(self, node, results):
        if node.type == token.NAME:
            new = node.clone()
            new.value = _mapping[node.value]
            return new
        elif node.type == token.STRING:
            if _literal_re.match(node.value):
                new = node.clone()
                new.value = new.value[1:]
                return new

# ==== fixes/fix_buffer.pyo -- compiled bytecode dropped ====
# fix_buffer docstring (recovered): Fixer that changes buffer(...) into
# memoryview(...).

# ==== fixes/fix_itertools.pyo -- compiled bytecode dropped ====
# fix_itertools docstring (recovered): Fixer for
# itertools.(imap|ifilter|izip) --> (map|filter|zip) and
# itertools.ifilterfalse --> itertools.filterfalse (bugs 2360-2363).
# Imports from itertools are fixed in fix_itertools_imports.py.  If
# itertools is imported as something else (ie: import itertools as it;
# it.izip(spam, eggs)) method calls will not get fixed.
# ==== fixes/fix_getcwdu.pyo, fixes/fix_getcwdu.pyc ====
# Compiled bytecode dropped.  Docstring (recovered): Fixer that changes
# os.getcwdu() to os.getcwd().

# ==== fixes/fix_apply.pyc -- compiled bytecode dropped ====
# fix_apply docstring (recovered): Fixer for apply().  This converts
# apply(func, v, k) into (func)(*v, **k).

# ==== fixes/fix_imports.pyc -- compiled bytecode dropped ====
# fix_imports docstring (recovered): Fix incompatible imports and module
# references.  The embedded MAPPING renames Python 2 stdlib modules to
# their Python 3 names (StringIO/cStringIO -> io, cPickle -> pickle,
# __builtin__ -> builtins, Queue -> queue, Tkinter -> tkinter,
# urlparse -> urllib.parse, httplib -> http.client, and so on).

# ==== fixes/fix_import.py ====
"""Fixer for import statements.
If spam is being imported from the local directory, this import:
    from spam import eggs
Becomes:
    from .spam import eggs

And this import:
    import spam
Becomes:
    from . import spam
"""

# Local imports
from .. import fixer_base
from os.path import dirname, join, exists, sep
from ..fixer_util import FromImport, syms, token


def traverse_imports(names):
    """
    Walks over all the names imported in a dotted_as_names node.
    """
    pending = [names]
    while pending:
        node = pending.pop()
        if node.type == token.NAME:
            yield node.value
        elif node.type == syms.dotted_name:
            yield "".join([ch.value for ch in node.children])
        elif node.type == syms.dotted_as_name:
            pending.append(node.children[0])
        elif node.type == syms.dotted_as_names:
            pending.extend(node.children[::-2])
        else:
            raise AssertionError("unknown node type")


class FixImport(fixer_base.BaseFix):

    PATTERN = """
    import_from< 'from' imp=any 'import' ['('] any [')'] >
    |
    import_name< 'import' imp=any >
    """

    def start_tree(self, tree, name):
        super(FixImport, self).start_tree(tree, name)
        self.skip = "absolute_import" in tree.future_features

    def transform(self, node, results):
        if self.skip:
            return
        imp = results['imp']

        if node.type == syms.import_from:
            # Some imps are top-level (eg: 'import ham')
            # some are first level (eg: 'import ham.eggs')
            # some are third level (eg: 'import ham.eggs as spam')
            # Hence, the loop
            while not hasattr(imp, 'value'):
                imp = imp.children[0]
            if self.probably_a_local_import(imp.value):
                imp.value = u"." + imp.value
                imp.changed()
        else:
            have_local = False
            have_absolute = False
            for mod_name in traverse_imports(imp):
                if self.probably_a_local_import(mod_name):
                    have_local = True
                else:
                    have_absolute = True
            if have_absolute:
                if have_local:
                    # We won't handle both sibling and absolute imports in
                    # the same statement at the moment.
                    self.warning(node, "absolute and local imports together")
                return

            new = FromImport(u".", [imp])
            new.prefix = node.prefix
            return new

    def probably_a_local_import(self, imp_name):
        if imp_name.startswith(u"."):
            # Relative imports are certainly not local imports.
            return False
        imp_name = imp_name.split(u".", 1)[0]
        base_path = dirname(self.filename)
        base_path = join(base_path, imp_name)
        # If there is no __init__.py next to the file, it's not in a
        # package, so this can't be a relative import.
        if not exists(join(dirname(base_path), "__init__.py")):
            return False
        for ext in [".py", sep, ".pyc", ".so", ".sl", ".pyd"]:
            if exists(base_path + ext):
                return True
        return False

# ==== fixes/fix_intern.pyc, fixes/fix_reduce.pyo ====
# Compiled bytecode dropped (source appears elsewhere in this dump as
# fixes/fix_intern.py and fixes/fix_reduce.py).
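fix_import decides whether `import spam` refers to a sibling module by probing the filesystem next to `self.filename`; if the probe succeeds it rewrites the statement to `from . import spam`. A simplified, string-level sketch of that decision — note this is an illustration only: the real fixer works on the parse tree, handles dotted names and `as` clauses, and the `local_modules` set here is a stand-in for the `probably_a_local_import` filesystem check:

```python
import re

def rewrite_import(line, local_modules):
    """Rewrite 'import spam' to 'from . import spam' when spam is local.

    Simplified sketch: no dotted names, no 'as' clauses, and the
    filesystem probe is replaced by a plain membership test.
    """
    m = re.match(r"^import (\w+)$", line)
    if m and m.group(1) in local_modules:
        return "from . import %s" % m.group(1)
    return line

assert rewrite_import("import spam", {"spam"}) == "from . import spam"
assert rewrite_import("import os", {"spam"}) == "import os"
```

The tree-based version preserves prefixes and comments, which a regex over source text cannot do reliably.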
# ==== fixes/fix_numliterals.pyo -- compiled bytecode dropped ====
# fix_numliterals docstring (recovered): Fixer that turns 1L into 1,
# 0755 into 0o755.

# ==== fixes/fix_basestring.py ====
"""Fixer for basestring -> str."""
# Author: Christian Heimes

# Local imports
from .. import fixer_base
from ..fixer_util import Name

class FixBasestring(fixer_base.BaseFix):

    PATTERN = "'basestring'"

    def transform(self, node, results):
        return Name(u"str", prefix=node.prefix)

# ==== fixes/fix_idioms.pyo -- compiled bytecode dropped ====
# fix_idioms docstring (recovered): Adjust some old Python 2 idioms to
# their modern counterparts.
#   * Change some type comparisons to isinstance() calls:
#       type(x) == T     -> isinstance(x, T)
#       type(x) is T     -> isinstance(x, T)
#       type(x) != T     -> not isinstance(x, T)
#       type(x) is not T -> not isinstance(x, T)
#   * Change "while 1:" into "while True:".
#   * Change
#       v = list(EXPR); v.sort(); foo(v)
#     (and the more general "v = EXPR; v.sort(); foo(v)") into
#       v = sorted(EXPR); foo(v)

# ==== fixes/fix_nonzero.pyc -- compiled bytecode dropped ====
# fix_nonzero docstring (recovered): Fixer for __nonzero__ -> __bool__
# methods.
into range(...).i(t fixer_base(tNametCalltconsuming_calls(tpatcompt FixXrangecBsneZdZdZdZdZdZdZdZe i eZ dZ e i e Z dZRS( s power< (name='range'|name='xrange') trailer< '(' args=any ')' > rest=any* > cCs)tt|i||t|_dS(N(tsuperRt start_treetsetttransformed_xranges(tselfttreetfilename((s0/usr/lib64/python2.6/lib2to3/fixes/fix_xrange.pyRscCs d|_dS(N(tNoneR (R R R ((s0/usr/lib64/python2.6/lib2to3/fixes/fix_xrange.pyt finish_treescCsb|d}|idjo|i||S|idjo|i||Stt|dS(Ntnameuxrangeurange(tvaluettransform_xrangettransform_ranget ValueErrortrepr(R tnodetresultsR((s0/usr/lib64/python2.6/lib2to3/fixes/fix_xrange.pyt transforms  cCs@|d}|itdd|i|iit|dS(NRurangetprefix(treplaceRRR taddtid(R RRR((s0/usr/lib64/python2.6/lib2to3/fixes/fix_xrange.pyR$s cCst||ijo{|i| ojttd|dig}ttd|gd|i}x|dD]}|i|quW|SdS(NurangetargsulistRtrest(RR tin_special_contextRRtcloneRt append_child(R RRt range_callt list_calltn((s0/usr/lib64/python2.6/lib2to3/fixes/fix_xrange.pyR*s"  s3power< func=NAME trailer< '(' node=any ')' > any* >sfor_stmt< 'for' any 'in' node=any ':' any* > | comp_for< 'for' any 'in' node=any any* > | comparison< any 'in' node=any any*> cCs|idjotSh}|iidj o?|ii|ii|o#|d|jo|ditjS|ii|i|o|d|jS(NRtfunc(tparentR tFalsetp1tmatchRRtp2(R RR((s0/usr/lib64/python2.6/lib2to3/fixes/fix_xrange.pyR?s(t__name__t __module__tPATTERNRRRRRtP1Rtcompile_patternR'tP2R)R(((s0/usr/lib64/python2.6/lib2to3/fixes/fix_xrange.pyR s    N( t__doc__tRt fixer_utilRRRRtBaseFixR(((s0/usr/lib64/python2.6/lib2to3/fixes/fix_xrange.pytsfixes/fix_print.pyc000066600000005304150501042300010375 0ustar00 Lc@sdZddklZddklZddklZddklZddklZl Z l Z l Z l Z ei dZdeifd YZd S( s Fixer for print. Change: 'print' into 'print()' 'print ...' into 'print(...)' 'print ... ,' into 'print(..., end=" ")' 'print >>x, ...' 
fixes/fix_xrange.py

# Copyright 2007 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer that changes xrange(...) into range(...)."""

# Local imports
from .. import fixer_base
from ..fixer_util import Name, Call, consuming_calls
from .. import patcomp


class FixXrange(fixer_base.BaseFix):

    PATTERN = """
              power<
                 (name='range'|name='xrange') trailer< '(' args=any ')' >
              rest=any* >
              """

    def start_tree(self, tree, filename):
        super(FixXrange, self).start_tree(tree, filename)
        self.transformed_xranges = set()

    def finish_tree(self, tree, filename):
        self.transformed_xranges = None

    def transform(self, node, results):
        name = results["name"]
        if name.value == u"xrange":
            return self.transform_xrange(node, results)
        elif name.value == u"range":
            return self.transform_range(node, results)
        else:
            raise ValueError(repr(name))

    def transform_xrange(self, node, results):
        name = results["name"]
        name.replace(Name(u"range", prefix=name.prefix))
        # This prevents the new range call from being wrapped in a list later.
        self.transformed_xranges.add(id(node))

    def transform_range(self, node, results):
        if (id(node) not in self.transformed_xranges and
                not self.in_special_context(node)):
            range_call = Call(Name(u"range"), [results["args"].clone()])
            # Encase the range call in list().
            list_call = Call(Name(u"list"), [range_call],
                             prefix=node.prefix)
            # Put things that were after the range() call after the list call.
            for n in results["rest"]:
                list_call.append_child(n)
            return list_call

    P1 = "power< func=NAME trailer< '(' node=any ')' > any* >"
    p1 = patcomp.compile_pattern(P1)

    P2 = """for_stmt< 'for' any 'in' node=any ':' any* >
            | comp_for< 'for' any 'in' node=any any* >
            | comparison< any 'in' node=any any*>
         """
    p2 = patcomp.compile_pattern(P2)

    def in_special_context(self, node):
        if node.parent is None:
            return False
        results = {}
        if (node.parent.parent is not None and
                self.p1.match(node.parent.parent, results) and
                results["node"] is node):
            # list(d.keys()) -> list(d.keys()), etc.
            return results["func"].value in consuming_calls
        # for ... in d.iterkeys() -> for ... in d.keys(), etc.
        return self.p2.match(node.parent, results) and results["node"] is node
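The runtime equivalence this fixer relies on can be checked in plain Python 3, independent of lib2to3. Python 2's `xrange()` is a lazy sequence, as is Python 3's `range()`, so renaming `xrange` to `range` is safe; only old `range()` calls need a `list()` wrapper to keep their eager-list semantics. A minimal sketch:

```python
# The renamed xrange(): still a lazy sequence object in Python 3.
lazy = range(5)
# What a Python 2 range(5) used to return, hence the list() wrapper.
eager = list(range(5))

assert list(lazy) == eager == [0, 1, 2, 3, 4]

# "Special contexts" such as iteration need no list() wrapper,
# which is what in_special_context() detects.
total = sum(x * x for x in range(5))
```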
fixes/__init__.py

# Dummy file to make this directory a package.

fixes/fix_xreadlines.py

"""Fix "for x in f.xreadlines()" -> "for x in f".

This fixer will also convert g(f.xreadlines) into g(f.__iter__)."""
# Author: Collin Winter

# Local imports
from .. import fixer_base
from ..fixer_util import Name


class FixXreadlines(fixer_base.BaseFix):
    PATTERN = """
    power< call=any+ trailer< '.' 'xreadlines' > trailer< '(' ')' > >
    |
    power< any+ trailer< '.' no_call='xreadlines' > >
    """

    def transform(self, node, results):
        no_call = results.get("no_call")

        if no_call:
            no_call.replace(Name(u"__iter__", prefix=no_call.prefix))
        else:
            node.replace([x.clone() for x in results["call"]])
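For reference, the behaviour this fixer targets: Python 2's `f.xreadlines()` is just line iteration over the file object, which is what the rewritten code does directly. A self-contained illustration using a throwaway temporary file (standard library only):

```python
import os
import tempfile

# Create a small file so the example is self-contained.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("a\nb\nc\n")

# "for x in f.xreadlines()" becomes plain "for x in f":
with open(path) as f:
    lines = [line.strip() for line in f]

os.remove(path)
```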
fixes/fix_imports.py

"""Fix incompatible imports and module references."""
# Authors: Collin Winter, Nick Edds

# Local imports
from .. import fixer_base
from ..fixer_util import Name, attr_chain

MAPPING = {'StringIO':  'io',
           'cStringIO': 'io',
           'cPickle': 'pickle',
           '__builtin__' : 'builtins',
           'copy_reg': 'copyreg',
           'Queue': 'queue',
           'SocketServer': 'socketserver',
           'ConfigParser': 'configparser',
           'repr': 'reprlib',
           'FileDialog': 'tkinter.filedialog',
           'tkFileDialog': 'tkinter.filedialog',
           'SimpleDialog': 'tkinter.simpledialog',
           'tkSimpleDialog': 'tkinter.simpledialog',
           'tkColorChooser': 'tkinter.colorchooser',
           'tkCommonDialog': 'tkinter.commondialog',
           'Dialog': 'tkinter.dialog',
           'Tkdnd': 'tkinter.dnd',
           'tkFont': 'tkinter.font',
           'tkMessageBox': 'tkinter.messagebox',
           'ScrolledText': 'tkinter.scrolledtext',
           'Tkconstants': 'tkinter.constants',
           'Tix': 'tkinter.tix',
           'ttk': 'tkinter.ttk',
           'Tkinter': 'tkinter',
           'markupbase': '_markupbase',
           '_winreg': 'winreg',
           'thread': '_thread',
           'dummy_thread': '_dummy_thread',
           # anydbm and whichdb are handled by fix_imports2
           'dbhash': 'dbm.bsd',
           'dumbdbm': 'dbm.dumb',
           'dbm': 'dbm.ndbm',
           'gdbm': 'dbm.gnu',
           'xmlrpclib': 'xmlrpc.client',
           'DocXMLRPCServer': 'xmlrpc.server',
           'SimpleXMLRPCServer': 'xmlrpc.server',
           'httplib': 'http.client',
           'htmlentitydefs' : 'html.entities',
           'HTMLParser' : 'html.parser',
           'Cookie': 'http.cookies',
           'cookielib': 'http.cookiejar',
           'BaseHTTPServer': 'http.server',
           'SimpleHTTPServer': 'http.server',
           'CGIHTTPServer': 'http.server',
           #'test.test_support': 'test.support',
           'commands': 'subprocess',
           'UserString' : 'collections',
           'UserList' : 'collections',
           'urlparse' : 'urllib.parse',
           'robotparser' : 'urllib.robotparser',
}


def alternates(members):
    return "(" + "|".join(map(repr, members)) + ")"


def build_pattern(mapping=MAPPING):
    mod_list = ' | '.join(["module_name='%s'" % key for key in mapping])
    bare_names = alternates(mapping.keys())

    yield """name_import=import_name< 'import' ((%s) |
               multiple_imports=dotted_as_names< any* (%s) any* >) >
          """ % (mod_list, mod_list)
    yield """import_from< 'from' (%s) 'import' ['(']
              ( any | import_as_name< any 'as' any > |
                import_as_names< any* >)  [')'] >
          """ % mod_list
    yield """import_name< 'import' (dotted_as_name< (%s) 'as' any > |
               multiple_imports=dotted_as_names<
                 any* dotted_as_name< (%s) 'as' any > any* >) >
          """ % (mod_list, mod_list)

    # Find usages of module members in code e.g. thread.foo(bar)
    yield "power< bare_with_attr=(%s) trailer<'.' any > any* >" % bare_names


class FixImports(fixer_base.BaseFix):

    # This is overridden in fix_imports2.
    mapping = MAPPING

    # We want to run this fixer late, so fix_import doesn't try to make stdlib
    # renames into relative imports.
    run_order = 6

    def build_pattern(self):
        return "|".join(build_pattern(self.mapping))

    def compile_pattern(self):
        # We override this, so MAPPING can be pragmatically altered and the
        # changes will be reflected in PATTERN.
        self.PATTERN = self.build_pattern()
        super(FixImports, self).compile_pattern()

    # Don't match the node if it's within another match.
    def match(self, node):
        match = super(FixImports, self).match
        results = match(node)
        if results:
            # Module usage could be in the trailer of an attribute lookup, so
            # we might have nested matches when "bare_with_attr" is present.
            if "bare_with_attr" not in results and \
                    any(match(obj) for obj in attr_chain(node, "parent")):
                return False
            return results
        return False

    def start_tree(self, tree, filename):
        super(FixImports, self).start_tree(tree, filename)
        self.replace = {}

    def transform(self, node, results):
        import_mod = results.get("module_name")
        if import_mod:
            mod_name = import_mod.value
            new_name = unicode(self.mapping[mod_name])
            import_mod.replace(Name(new_name, prefix=import_mod.prefix))
            if "name_import" in results:
                # If it's not a "from x import x, y" or "import x as y" import,
                # mark its usage to be replaced.
                self.replace[mod_name] = new_name
            if "multiple_imports" in results:
                # This is a nasty hack to fix multiple imports on a line (e.g.,
                # "import StringIO, urlparse"). The problem is that I can't
                # figure out an easy way to make a pattern recognize the keys
                # of MAPPING randomly sprinkled in an import statement.
                results = self.match(node)
                if results:
                    self.transform(node, results)
        else:
            # Replace usage of the module.
            bare_name = results["bare_with_attr"][0]
            new_name = self.replace.get(bare_name.value)
            if new_name:
                bare_name.replace(Name(new_name, prefix=bare_name.prefix))

fixes/fix_methodattrs.py

"""Fix bound method attributes (method.im_? -> method.__?__).
"""
# Author: Christian Heimes

# Local imports
from .. import fixer_base
from ..fixer_util import Name

MAP = {
    "im_func" : "__func__",
    "im_self" : "__self__",
    "im_class" : "__self__.__class__"
    }

class FixMethodattrs(fixer_base.BaseFix):
    PATTERN = """
    power< any+ trailer< '.' attr=('im_func' | 'im_self' | 'im_class') > any* >
    """

    def transform(self, node, results):
        attr = results["attr"][0]
        new = unicode(MAP[attr.value])
        attr.replace(Name(new, prefix=attr.prefix))

fixes/fix_sys_exc.py

"""Fixer for sys.exc_{type, value, traceback}

sys.exc_type -> sys.exc_info()[0]
sys.exc_value -> sys.exc_info()[1]
sys.exc_traceback -> sys.exc_info()[2]
"""

# By Jeff Balogh and Benjamin Peterson

# Local imports
from .. import fixer_base
from ..fixer_util import Attr, Call, Name, Number, Subscript, Node, syms

class FixSysExc(fixer_base.BaseFix):
    # This order matches the ordering of sys.exc_info().
    exc_info = [u"exc_type", u"exc_value", u"exc_traceback"]
    PATTERN = """
              power< 'sys' trailer< dot='.' attribute=(%s) > >
              """ % '|'.join("'%s'" % e for e in exc_info)

    def transform(self, node, results):
        sys_attr = results["attribute"][0]
        index = Number(self.exc_info.index(sys_attr.value))

        call = Call(Name(u"exc_info"), prefix=sys_attr.prefix)
        attr = Attr(Name(u"sys"), call)
        attr[1].children[0].prefix = results["dot"].prefix
        attr.append(Subscript(index))
        return Node(syms.power, attr, prefix=node.prefix)
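The index mapping used by `FixSysExc` mirrors the tuple returned by `sys.exc_info()`: position 0 is the exception type, 1 the value, 2 the traceback. That correspondence can be checked in plain Python, without lib2to3:

```python
import sys

def current_exc_parts():
    """Return (exc_type, exc_value, exc_traceback) the Python 3 way,
    i.e. what sys.exc_type / exc_value / exc_traceback used to hold."""
    info = sys.exc_info()
    return info[0], info[1], info[2]

try:
    raise ValueError("boom")
except ValueError:
    exc_type, exc_value, exc_tb = current_exc_parts()

assert exc_type is ValueError
assert str(exc_value) == "boom"
assert exc_tb is not None
```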
- "except E, T:" where T is a tuple or list literal: except E as t: T = t.args i(tpytree(ttoken(t fixer_base(tAssigntAttrtNametis_tupletis_listtsymsccsfx_t|D]Q\}}|itijo2|ididjo|||dfVq^q q WdS(Niuexcepti(t enumeratettypeRt except_clausetchildrentvalue(tnodestitn((s0/usr/lib64/python2.6/lib2to3/fixes/fix_except.pyt find_exceptss  t FixExceptcBseZdZdZRS(s1 try_stmt< 'try' ':' (simple_stmt | suite) cleanup=(except_clause ':' (simple_stmt | suite))+ tail=(['except' ':' (simple_stmt | suite)] ['else' ':' (simple_stmt | suite)] ['finally' ':' (simple_stmt | suite)]) > cCsO|i}g}|dD]}||iq~}g}|dD]}||iqC~} xt| D]\} } t| idjot| idd!\} } }| itddd|iti jot|i dd}|i}d|_ |i||i}| i}x2t |D]$\}}t |tioPq/q/Wt|p t|o"t|t|td }nt||}x(t|| D]}| id |qW| i||q|i djo d|_ qqlqlWg}|id D]}||iq~| |}ti|i|S( Nttailtcleanupiiuastprefixu uuargsii(RtcloneRtlenR treplaceRR RtNAMEtnew_nameRR t isinstanceRtNodeRRRRtreversedt insert_child(tselftnodetresultsRt_[1]RRt_[2]tcht try_cleanupR te_suitetEtcommatNtnew_Nttargett suite_stmtsRtstmttassigntchildt_[3]tcR ((s0/usr/lib64/python2.6/lib2to3/fixes/fix_except.pyt transform.s< ++        "6(t__name__t __module__tPATTERNR2(((s0/usr/lib64/python2.6/lib2to3/fixes/fix_except.pyR$sN(t__doc__tRtpgen2RRt fixer_utilRRRRRRRtBaseFixR(((s0/usr/lib64/python2.6/lib2to3/fixes/fix_except.pyts . fixes/fix_except.py000066600000006377150501042300010401 0ustar00"""Fixer for except statements with named exceptions. The following cases will be converted: - "except E, T:" where T is a name: except E as T: - "except E, T:" where T is not a name, tuple or list: except E as t: T = t This is done because the target of an "except" clause must be a name. - "except E, T:" where T is a tuple or list literal: except E as t: T = t.args """ # Author: Collin Winter # Local imports from .. import pytree from ..pgen2 import token from .. 
import fixer_base from ..fixer_util import Assign, Attr, Name, is_tuple, is_list, syms def find_excepts(nodes): for i, n in enumerate(nodes): if n.type == syms.except_clause: if n.children[0].value == u'except': yield (n, nodes[i+2]) class FixExcept(fixer_base.BaseFix): PATTERN = """ try_stmt< 'try' ':' (simple_stmt | suite) cleanup=(except_clause ':' (simple_stmt | suite))+ tail=(['except' ':' (simple_stmt | suite)] ['else' ':' (simple_stmt | suite)] ['finally' ':' (simple_stmt | suite)]) > """ def transform(self, node, results): syms = self.syms tail = [n.clone() for n in results["tail"]] try_cleanup = [ch.clone() for ch in results["cleanup"]] for except_clause, e_suite in find_excepts(try_cleanup): if len(except_clause.children) == 4: (E, comma, N) = except_clause.children[1:4] comma.replace(Name(u"as", prefix=u" ")) if N.type != token.NAME: # Generate a new N for the except clause new_N = Name(self.new_name(), prefix=u" ") target = N.clone() target.prefix = u"" N.replace(new_N) new_N = new_N.clone() # Insert "old_N = new_N" as the first statement in # the except body. This loop skips leading whitespace # and indents #TODO(cwinter) suite-cleanup suite_stmts = e_suite.children for i, stmt in enumerate(suite_stmts): if isinstance(stmt, pytree.Node): break # The assignment is different if old_N is a tuple or list # In that case, the assignment is old_N = new_N.args if is_tuple(N) or is_list(N): assign = Assign(target, Attr(new_N, Name(u'args'))) else: assign = Assign(target, new_N) #TODO(cwinter) stopgap until children becomes a smart list for child in reversed(suite_stmts[:i]): e_suite.insert_child(0, child) e_suite.insert_child(i, assign) elif N.prefix == u"": # No space after a comma is legal; no space after "as", # not so much. 
N.prefix = u" " #TODO(cwinter) fix this when children becomes a smart list children = [c.clone() for c in node.children[:3]] + try_cleanup + tail return pytree.Node(node.type, children) fixes/fix_ne.pyc000066600000001751150501042300007645 0ustar00 Lc@sSdZddklZddklZddklZdeifdYZdS(sFixer that turns <> into !=.i(tpytree(ttoken(t fixer_basetFixNecBs#eZeiZdZdZRS(cCs |idjS(Nu<>(tvalue(tselftnode((s,/usr/lib64/python2.6/lib2to3/fixes/fix_ne.pytmatchscCs"titidd|i}|S(Nu!=tprefix(RtLeafRtNOTEQUALR(RRtresultstnew((s,/usr/lib64/python2.6/lib2to3/fixes/fix_ne.pyt transforms(t__name__t __module__RR t _accept_typeRR (((s,/usr/lib64/python2.6/lib2to3/fixes/fix_ne.pyR s  N(t__doc__tRtpgen2RRtBaseFixR(((s,/usr/lib64/python2.6/lib2to3/fixes/fix_ne.pytsfixes/fix_idioms.py000066600000011432150501042300010361 0ustar00"""Adjust some old Python 2 idioms to their modern counterparts. * Change some type comparisons to isinstance() calls: type(x) == T -> isinstance(x, T) type(x) is T -> isinstance(x, T) type(x) != T -> not isinstance(x, T) type(x) is not T -> not isinstance(x, T) * Change "while 1:" into "while True:". * Change both v = list(EXPR) v.sort() foo(v) and the more general v = EXPR v.sort() foo(v) into v = sorted(EXPR) foo(v) """ # Author: Jacques Frechet, Collin Winter # Local imports from .. import fixer_base from ..fixer_util import Call, Comma, Name, Node, BlankLine, syms CMP = "(n='!=' | '==' | 'is' | n=comp_op< 'is' 'not' >)" TYPE = "power< 'type' trailer< '(' x=any ')' > >" class FixIdioms(fixer_base.BaseFix): explicit = True # The user must ask for this fixer PATTERN = r""" isinstance=comparison< %s %s T=any > | isinstance=comparison< T=any %s %s > | while_stmt< 'while' while='1' ':' any+ > | sorted=any< any* simple_stmt< expr_stmt< id1=any '=' power< list='list' trailer< '(' (not arglist) any ')' > > > '\n' > sort= simple_stmt< power< id2=any trailer< '.' 
'sort' > trailer< '(' ')' > > '\n' > next=any* > | sorted=any< any* simple_stmt< expr_stmt< id1=any '=' expr=any > '\n' > sort= simple_stmt< power< id2=any trailer< '.' 'sort' > trailer< '(' ')' > > '\n' > next=any* > """ % (TYPE, CMP, CMP, TYPE) def match(self, node): r = super(FixIdioms, self).match(node) # If we've matched one of the sort/sorted subpatterns above, we # want to reject matches where the initial assignment and the # subsequent .sort() call involve different identifiers. if r and "sorted" in r: if r["id1"] == r["id2"]: return r return None return r def transform(self, node, results): if "isinstance" in results: return self.transform_isinstance(node, results) elif "while" in results: return self.transform_while(node, results) elif "sorted" in results: return self.transform_sort(node, results) else: raise RuntimeError("Invalid match") def transform_isinstance(self, node, results): x = results["x"].clone() # The thing inside of type() T = results["T"].clone() # The type being compared against x.prefix = u"" T.prefix = u" " test = Call(Name(u"isinstance"), [x, Comma(), T]) if "n" in results: test.prefix = u" " test = Node(syms.not_test, [Name(u"not"), test]) test.prefix = node.prefix return test def transform_while(self, node, results): one = results["while"] one.replace(Name(u"True", prefix=one.prefix)) def transform_sort(self, node, results): sort_stmt = results["sort"] next_stmt = results["next"] list_call = results.get("list") simple_expr = results.get("expr") if list_call: list_call.replace(Name(u"sorted", prefix=list_call.prefix)) elif simple_expr: new = simple_expr.clone() new.prefix = u"" simple_expr.replace(Call(Name(u"sorted"), [new], prefix=simple_expr.prefix)) else: raise RuntimeError("should not have reached here") sort_stmt.remove() btwn = sort_stmt.prefix # Keep any prefix lines between the sort_stmt and the list_call and # shove them right after the sorted() call. 
if u"\n" in btwn: if next_stmt: # The new prefix should be everything from the sort_stmt's # prefix up to the last newline, then the old prefix after a new # line. prefix_lines = (btwn.rpartition(u"\n")[0], next_stmt[0].prefix) next_stmt[0].prefix = u"\n".join(prefix_lines) else: assert list_call.parent assert list_call.next_sibling is None # Put a blank line after list_call and set its prefix. end_line = BlankLine() list_call.parent.append_child(end_line) assert list_call.next_sibling is end_line # The new prefix should be everything up to the first new line # of sort_stmt's prefix. end_line.prefix = btwn.rpartition(u"\n")[0] fixes/fix_itertools_imports.pyc000066600000003506150501042300013044 0ustar00 Lc@sOdZddklZddklZlZlZdeifdYZdS(sA Fixer for imports of itertools.(imap|ifilter|izip|ifilterfalse) i(t fixer_base(t BlankLinetsymsttokentFixItertoolsImportscBseZdeZdZRS(sT import_from< 'from' 'itertools' 'import' imports=any > c Cs|d}|itijp |i o |g}n |i}x|dddD]}|itijo|i}|}n(|itijpt|id}|i}|d jod|_|i qR|djo|i d|_qRqRW|ip|g}t } x@|D]8}| o!|iti jo|i q| t N} qW|d iti jo|d i n|ipt |d d p|idjo |i} t}| |_|SdS( Ntimportsiiuimapuizipuifilteru ifilterfalseu filterfalseitvalue(uimapuizipuifilter(ttypeRtimport_as_nametchildrenRtNAMERtAssertionErrortNonetremovetchangedtTruetCOMMAtgetattrtparenttprefixR( tselftnodetresultsRR tchildtmembert name_nodet member_namet remove_commatp((s;/usr/lib64/python2.6/lib2to3/fixes/fix_itertools_imports.pyt transform sB              (t__name__t __module__tlocalstPATTERNR(((s;/usr/lib64/python2.6/lib2to3/fixes/fix_itertools_imports.pyRs N( t__doc__tlib2to3Rtlib2to3.fixer_utilRRRtBaseFixR(((s;/usr/lib64/python2.6/lib2to3/fixes/fix_itertools_imports.pytsfixes/fix_has_key.pyc000066600000006216150501042300010667 0ustar00 Lc@sidZddklZddklZddklZddklZlZdei fdYZ dS( s&Fixer for has_key(). 
fixes/fix_isinstance.py

# Copyright 2008 Armin Ronacher.
# Licensed to PSF under a Contributor Agreement.

"""Fixer that cleans up a tuple argument to isinstance after the tokens
in it were fixed.  This is mainly used to remove double occurrences of
tokens as a leftover of the long -> int / unicode -> str conversion.

eg.  isinstance(x, (int, long)) -> isinstance(x, (int, int))
       -> isinstance(x, int)
"""

from .. import fixer_base
from ..fixer_util import token


class FixIsinstance(fixer_base.BaseFix):

    PATTERN = """
    power<
        'isinstance'
        trailer< '(' arglist< any ',' atom< '('
            args=testlist_gexp< any+ >
        ')' > > ')' >
    >
    """

    run_order = 6

    def transform(self, node, results):
        names_inserted = set()
        testlist = results["args"]
        args = testlist.children
        new_args = []
        iterator = enumerate(args)
        for idx, arg in iterator:
            if arg.type == token.NAME and arg.value in names_inserted:
                if idx < len(args) - 1 and args[idx + 1].type == token.COMMA:
                    iterator.next()
                    continue
            else:
                new_args.append(arg)
                if arg.type == token.NAME:
                    names_inserted.add(arg.value)
        if new_args and new_args[-1].type == token.COMMA:
            del new_args[-1]
        if len(new_args) == 1:
            atom = testlist.parent
            new_args[0].prefix = atom.prefix
            atom.replace(new_args[0])
        else:
            args[:] = new_args
            node.changed()
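`FixIsinstance` deduplicates names in the tuple argument because, after earlier fixers run, `(int, long)` becomes the redundant `(int, int)`. The call is equivalent either way, so the cleanup is purely cosmetic; a small demonstration in plain Python:

```python
x = 7

# Redundant tuple left over by the long -> int conversion:
before = isinstance(x, (int, int))
# Cleaned-up form this fixer emits:
after = isinstance(x, int)

assert before is True
assert after is True
assert before == after
```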
fixes/fix_tuple_params.py

"""Fixer for function definitions with tuple parameters.

def func(((a, b), c), d):
    ...

    ->

def func(x, d):
    ((a, b), c) = x
    ...

It will also support lambdas:

    lambda (x, y): x + y -> lambda t: t[0] + t[1]

    # The parens are a syntax error in Python 3
    lambda (x): x + y -> lambda x: x + y
"""
# Author: Collin Winter

# Local imports
from .. import pytree
from ..pgen2 import token
from .. import fixer_base
from ..fixer_util import Assign, Name, Newline, Number, Subscript, syms

def is_docstring(stmt):
    return isinstance(stmt, pytree.Node) and \
           stmt.children[0].type == token.STRING

class FixTupleParams(fixer_base.BaseFix):
    PATTERN = """
              funcdef< 'def' any parameters< '(' args=any ')' >
                       ['->' any] ':' suite=any+ >
              |
              lambda=
              lambdef< 'lambda' args=vfpdef< '(' inner=any ')' >
                       ':' body=any
              >
              """

    def transform(self, node, results):
        if "lambda" in results:
            return self.transform_lambda(node, results)

        new_lines = []
        suite = results["suite"]
        args = results["args"]
        # This crap is so "def foo(...): x = 5; y = 7" is handled correctly.
        # TODO(cwinter): suite-cleanup
        if suite[0].children[1].type == token.INDENT:
            start = 2
            indent = suite[0].children[1].value
            end = Newline()
        else:
            start = 0
            indent = u"; "
            end = pytree.Leaf(token.INDENT, u"")

        # We need access to self for new_name(), and making this a method
        # doesn't feel right. Closing over self and new_lines makes the
        # code below cleaner.
        def handle_tuple(tuple_arg, add_prefix=False):
            n = Name(self.new_name())
            arg = tuple_arg.clone()
            arg.prefix = u""
            stmt = Assign(arg, n.clone())
            if add_prefix:
                n.prefix = u" "
            tuple_arg.replace(n)
            new_lines.append(pytree.Node(syms.simple_stmt,
                                         [stmt, end.clone()]))

        if args.type == syms.tfpdef:
            handle_tuple(args)
        elif args.type == syms.typedargslist:
            for i, arg in enumerate(args.children):
                if arg.type == syms.tfpdef:
                    # Without add_prefix, the emitted code is correct,
                    # just ugly.
                    handle_tuple(arg, add_prefix=(i > 0))

        if not new_lines:
            return

        # This isn't strictly necessary, but it plays nicely with other fixers.
        # TODO(cwinter) get rid of this when children becomes a smart list
        for line in new_lines:
            line.parent = suite[0]

        # TODO(cwinter) suite-cleanup
        after = start
        if start == 0:
            new_lines[0].prefix = u" "
        elif is_docstring(suite[0].children[start]):
            new_lines[0].prefix = indent
            after = start + 1

        for line in new_lines:
            line.parent = suite[0]
        suite[0].children[after:after] = new_lines
        for i in range(after+1, after+len(new_lines)+1):
            suite[0].children[i].prefix = indent
        suite[0].changed()

    def transform_lambda(self, node, results):
        args = results["args"]
        body = results["body"]
        inner = simplify_args(results["inner"])

        # Replace lambda ((((x)))): x  with  lambda x: x
        if inner.type == token.NAME:
            inner = inner.clone()
            inner.prefix = u" "
            args.replace(inner)
            return

        params = find_params(args)
        to_index = map_to_index(params)
        tup_name = self.new_name(tuple_name(params))

        new_param = Name(tup_name, prefix=u" ")
        args.replace(new_param.clone())
        for n in body.post_order():
            if n.type == token.NAME and n.value in to_index:
                subscripts = [c.clone() for c in to_index[n.value]]
                new = pytree.Node(syms.power,
                                  [new_param.clone()] + subscripts)
                new.prefix = n.prefix
                n.replace(new)


### Helper functions for transform_lambda()

def simplify_args(node):
    if node.type in (syms.vfplist, token.NAME):
        return node
    elif node.type == syms.vfpdef:
        # These look like vfpdef< '(' x ')' > where x is NAME
        # or another vfpdef instance (leading to recursion).
        while node.type == syms.vfpdef:
            node = node.children[1]
        return node
    raise RuntimeError("Received unexpected node %s" % node)

def find_params(node):
    if node.type == syms.vfpdef:
        return find_params(node.children[1])
    elif node.type == token.NAME:
        return node.value
    return [find_params(c) for c in node.children if c.type != token.COMMA]

def map_to_index(param_list, prefix=[], d=None):
    if d is None:
        d = {}
    for i, obj in enumerate(param_list):
        trailer = [Subscript(Number(unicode(i)))]
        if isinstance(obj, list):
            map_to_index(obj, trailer, d=d)
        else:
            d[obj] = prefix + trailer
    return d

def tuple_name(param_list):
    l = []
    for obj in param_list:
        if isinstance(obj, list):
            l.append(tuple_name(obj))
        else:
            l.append(obj)
    return u"_".join(l)
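The lambda rewrite described in the docstring replaces a tuple parameter with a single name plus subscripts. Both forms are equivalent at call time, as this sketch shows (only the rewritten form is written out, since tuple parameters are a syntax error in Python 3):

```python
# Python 2 original:   add = lambda (x, y): x + y
# Rewritten by fixer:  add = lambda x_y: x_y[0] + x_y[1]
add = lambda x_y: x_y[0] + x_y[1]

assert add((2, 3)) == 5

# Nested tuples are handled by chained subscripts in the same way:
# Python 2:   f = lambda ((a, b), c): a + b + c
# Rewritten:  f = lambda a_b_c: a_b_c[0][0] + a_b_c[0][1] + a_b_c[1]
f = lambda a_b_c: a_b_c[0][0] + a_b_c[0][1] + a_b_c[1]

assert f(((1, 2), 3)) == 6
```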
There are two possiblities: 1) clsdef => suite => simple_stmt => expr_stmt => Leaf('__meta') 2) clsdef => simple_stmt => expr_stmt => Leaf('__meta') it __metaclass__( tchildrenttypeRtsuitet has_metaclasst simple_stmtt expr_stmtt isinstanceRtvaluetTruetFalse(tparenttnodet expr_nodet left_side((s3/usr/lib64/python2.6/lib2to3/fixes/fix_metaclass.pyR s    cCsx)|iD]}|itijodSq WxAt|iD]$\}}|itijoPq<q<Wtdttig}xE|i|do2|i|d}|i |i |i qW|i ||}dS(sf one-line classes don't get a suite in the parse tree so we add one to normalize the tree NsNo class suite and no ':'!i( RRRR t enumerateRtCOLONt ValueErrorRt append_childtclonetremove(tcls_nodeRtiR t move_node((s3/usr/lib64/python2.6/lib2to3/fixes/fix_metaclass.pytfixup_parse_tree-s"      c Csx9t|iD]$\}}|itijoPqqWdS|ittig}tti |g}x=|i|o.|i|}|i |i |iqpW|i |||idid}|idid} | i |_ dS(s if there is a semi-colon all the parts count as part of the same simple_stmt. We just want the __metaclass__ part so we move everything efter the semi-colon into its own simple_stmt node Ni(RRRRtSEMIRRRR R RRt insert_childtprefix( RRt stmt_nodetsemi_indRtnew_exprtnew_stmtRt new_leaf1t old_leaf1((s3/usr/lib64/python2.6/lib2to3/fixes/fix_metaclass.pytfixup_simple_stmtGs"    cCs=|io/|iditijo|idindS(Ni(RRRtNEWLINER(R((s3/usr/lib64/python2.6/lib2to3/fixes/fix_metaclass.pytremove_trailing_newline_s$ccs x5|iD]}|itijoPq q Wtdxtt|iD]\}}|itijo|io|id}|itijog|io]|id}t |t o<|i djo,t |||t ||||fVqqqNqNWdS(NsNo class suite!iu __metaclass__(RRRR RtlistRR R R RRR(R*(RRRt simple_nodeRt left_node((s3/usr/lib64/python2.6/lib2to3/fixes/fix_metaclass.pyt find_metasds        cCs|iddd}x0|o(|i}|itijoPqqWxt|ol|i}t|to/|itijo|io d|_ndS|i |idddqLWdS(s If an INDENT is followed by a thing with a prefix then nuke the prefix Otherwise we get in trouble when removing __metaclass__ at suite start Niu( RtpopRRtINDENTR RtDEDENTR!textend(R tkidsR((s3/usr/lib64/python2.6/lib2to3/fixes/fix_metaclass.pyt fixup_indent{s   #  t FixMetaclasscBseZdZdZRS(s classdef 
cCs!t|pdSt|d}x-t|D]\}}}|}|iq/W|idi}t|idjog|iditi jo|id}q|idi } t ti | g}|i d|nt|idjo&t ti g}|i d|nt|idjo^t ti g}|i dttid|i d||i dttidn td |idid} d | _| i} |io&|ittid d | _n d | _|id} d | id_d | id_|i|t||ipL|it|d} | | _|i| |ittidnt|idjos|iditijoY|iditijo?t|d} |i d| |i dttidndS(Niiiiiiu)u(sUnexpected class definitiont metaclassu,u uiupassu ii(R RtNoneR.RRRtlenRtarglistRRt set_childR RRtRPARtLPARRRR!RtCOMMAR4R)R0R1(tselfRtresultstlast_metaclassR Rtstmtt text_typeR9Rtmeta_txttorig_meta_prefixR t pass_leaf((s3/usr/lib64/python2.6/lib2to3/fixes/fix_metaclass.pyt transforms`                 (t__name__t __module__tPATTERNRF(((s3/usr/lib64/python2.6/lib2to3/fixes/fix_metaclass.pyR5sN(t__doc__tRtpygramRt fixer_utilRRRRR RR(R*R.R4tBaseFixR5(((s3/usr/lib64/python2.6/lib2to3/fixes/fix_metaclass.pyts"      fixes/fix_ws_comma.py000066600000002107150501042300010701 0ustar00"""Fixer that changes 'a ,b' into 'a, b'. This also changes '{a :b}' into '{a: b}', but does not touch other uses of colons. It does not touch other uses of whitespace. """ from .. import pytree from ..pgen2 import token from .. import fixer_base class FixWsComma(fixer_base.BaseFix): explicit = True # The user must ask for this fixers PATTERN = """ any<(not(',') any)+ ',' ((not(',') any)+ ',')* [not(',') any]> """ COMMA = pytree.Leaf(token.COMMA, u",") COLON = pytree.Leaf(token.COLON, u":") SEPS = (COMMA, COLON) def transform(self, node, results): new = node.clone() comma = False for child in new.children: if child in self.SEPS: prefix = child.prefix if prefix.isspace() and u"\n" not in prefix: child.prefix = u"" comma = True else: if comma: prefix = child.prefix if not prefix: child.prefix = u" " comma = False return new fixes/fix_getcwdu.py000066600000000653150501042300010542 0ustar00""" Fixer that changes os.getcwdu() to os.getcwd(). """ # Author: Victor Stinner # Local imports from .. 
import fixer_base
from ..fixer_util import Name

class FixGetcwdu(fixer_base.BaseFix):

    PATTERN = """
              power< 'os' trailer< dot='.' name='getcwdu' > any* >
              """

    def transform(self, node, results):
        name = results["name"]
        name.replace(Name(u"getcwd", prefix=name.prefix))
fixes/fix_apply.py
# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer for apply().

This converts apply(func, v, k) into (func)(*v, **k)."""

# Local imports
from .. import pytree
from ..pgen2 import token
from ..
import fixer_base
from ..fixer_util import Call, Comma, parenthesize

class FixApply(fixer_base.BaseFix):

    PATTERN = """
    power< 'apply'
        trailer<
            '('
            arglist<
                (not argument<NAME '=' any>) func=any ','
                (not argument<NAME '=' any>) args=any [','
                (not argument<NAME '=' any>) kwds=any]
            ')'
        >
    >
    """

    def transform(self, node, results):
        syms = self.syms
        assert results
        func = results["func"]
        args = results["args"]
        kwds = results.get("kwds")
        prefix = node.prefix
        func = func.clone()
        if (func.type not in (token.NAME, syms.atom) and
            (func.type != syms.power or
             func.children[-2].type == token.DOUBLESTAR)):
            # Need to parenthesize
            func = parenthesize(func)
        func.prefix = ""
        args = args.clone()
        args.prefix = ""
        if kwds is not None:
            kwds = kwds.clone()
            kwds.prefix = ""
        l_newargs = [pytree.Leaf(token.STAR, u"*"), args]
        if kwds is not None:
            l_newargs.extend([Comma(),
                              pytree.Leaf(token.DOUBLESTAR, u"**"),
                              kwds])
            l_newargs[-2].prefix = u" "  # that's the ** token
        # XXX Sometimes we could be cleverer, e.g. apply(f, (x, y) + t)
        # can be translated into f(x, y, *t) instead of f(*(x, y) + t)
        #new = pytree.Node(syms.power, (func, ArgList(l_newargs)))
        return Call(func, l_newargs, prefix=prefix)
fixes/fix_zip.py
"""
Fixer that changes zip(seq0, seq1, ...) into list(zip(seq0, seq1, ...))
unless there exists a 'from future_builtins import zip' statement in the
top-level namespace.

We avoid the transformation if the zip() call is directly contained in
iter(<>), list(<>), tuple(<>), sorted(<>), ...join(<>), or for V in <>:.
"""

# Local imports
from ..
import fixer_base
from ..fixer_util import Name, Call, in_special_context

class FixZip(fixer_base.ConditionalFix):

    PATTERN = """
    power< 'zip' args=trailer< '(' [any] ')' > >
    """

    skip_on = "future_builtins.zip"

    def transform(self, node, results):
        if self.should_skip(node):
            return

        if in_special_context(node):
            return None

        new = node.clone()
        new.prefix = u""
        new = Call(Name(u"list"), [new])
        new.prefix = node.prefix
        return new
fixes/fix_next.py
"""Fixer for it.next() -> next(it), per PEP 3114."""
# Author: Collin Winter

# Things that currently aren't covered:
#   - listcomp "next" names aren't warned
#   - "with" statement targets aren't checked

# Local imports
from ..pgen2 import token
from ..pygram import python_symbols as syms
from .. import fixer_base
from ..fixer_util import Name, Call, find_binding

bind_warning = "Calls to builtin next() possibly shadowed by global binding"


class FixNext(fixer_base.BaseFix):
    PATTERN = """
    power< base=any+ trailer< '.' attr='next' > trailer< '(' ')' > >
    |
    power< head=any+ trailer< '.' attr='next' > not trailer< '(' ')' > >
    |
    classdef< 'class' any+ ':'
              suite< any*
                     funcdef< 'def'
                              name='next'
                              parameters< '(' NAME ')' > any+ >
                     any* > >
    |
    global=global_stmt< 'global' any* 'next' any* >
    """

    order = "pre"  # Pre-order tree traversal

    def start_tree(self, tree, filename):
        super(FixNext, self).start_tree(tree, filename)

        n = find_binding(u'next', tree)
        if n:
            self.warning(n, bind_warning)
            self.shadowed_next = True
        else:
            self.shadowed_next = False

    def transform(self, node, results):
        assert results

        base = results.get("base")
        attr = results.get("attr")
        name = results.get("name")

        if base:
            if self.shadowed_next:
                attr.replace(Name(u"__next__", prefix=attr.prefix))
            else:
                base = [n.clone() for n in base]
                base[0].prefix = u""
                node.replace(Call(Name(u"next", prefix=node.prefix), base))
        elif name:
            n = Name(u"__next__", prefix=name.prefix)
            name.replace(n)
        elif attr:
            # We don't do this transformation if we're assigning to "x.next".
            # Unfortunately, it doesn't seem possible to do this in PATTERN,
            # so it's being done here.
            if is_assign_target(node):
                head = results["head"]
                if "".join([str(n) for n in head]).strip() == u'__builtin__':
                    self.warning(node, bind_warning)
                return
            attr.replace(Name(u"__next__"))
        elif "global" in results:
            self.warning(node, bind_warning)
            self.shadowed_next = True


### The following functions help test if node is part of an assignment
### target.

def is_assign_target(node):
    assign = find_assign(node)
    if assign is None:
        return False

    for child in assign.children:
        if child.type == token.EQUAL:
            return False
        elif is_subtree(child, node):
            return True
    return False

def find_assign(node):
    if node.type == syms.expr_stmt:
        return node
    if node.type == syms.simple_stmt or node.parent is None:
        return None
    return find_assign(node.parent)

def is_subtree(root, node):
    if root == node:
        return True
    return any(is_subtree(c, node) for c in root.children)
fixes/fix_print.py
# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer for print.

Change:
    'print'          into 'print()'
    'print ...'      into 'print(...)'
    'print ... ,'    into 'print(..., end=" ")'
    'print >>x, ...' into 'print(..., file=x)'

No changes are applied if print_function is imported from __future__
"""

# Local imports
from .. import patcomp
from .. import pytree
from ..pgen2 import token
from ..
import fixer_base
from ..fixer_util import Name, Call, Comma, String, is_tuple


parend_expr = patcomp.compile_pattern(
              """atom< '(' [atom|STRING|NAME] ')' >"""
              )


class FixPrint(fixer_base.BaseFix):

    PATTERN = """
              simple_stmt< any* bare='print' any* > | print_stmt
              """

    def transform(self, node, results):
        assert results

        bare_print = results.get("bare")

        if bare_print:
            # Special-case print all by itself
            bare_print.replace(Call(Name(u"print"), [],
                               prefix=bare_print.prefix))
            return
        assert node.children[0] == Name(u"print")
        args = node.children[1:]
        if len(args) == 1 and parend_expr.match(args[0]):
            # We don't want to keep sticking parens around an
            # already-parenthesised expression.
            return

        sep = end = file = None
        if args and args[-1] == Comma():
            args = args[:-1]
            end = " "
        if args and args[0] == pytree.Leaf(token.RIGHTSHIFT, u">>"):
            assert len(args) >= 2
            file = args[1].clone()
            args = args[3:]  # Strip a possible comma after the file expression
        # Now synthesize a print(args, sep=..., end=..., file=...) node.
        l_args = [arg.clone() for arg in args]
        if l_args:
            l_args[0].prefix = u""
        if sep is not None or end is not None or file is not None:
            if sep is not None:
                self.add_kwarg(l_args, u"sep", String(repr(sep)))
            if end is not None:
                self.add_kwarg(l_args, u"end", String(repr(end)))
            if file is not None:
                self.add_kwarg(l_args, u"file", file)
        n_stmt = Call(Name(u"print"), l_args)
        n_stmt.prefix = node.prefix
        return n_stmt

    def add_kwarg(self, l_nodes, s_kwd, n_expr):
        # XXX All this prefix-setting may lose comments (though rarely)
        n_expr.prefix = u""
        n_argument = pytree.Node(self.syms.argument,
                                 (Name(s_kwd),
                                  pytree.Leaf(token.EQUAL, u"="),
                                  n_expr))
        if l_nodes:
            l_nodes.append(Comma())
            n_argument.prefix = u" "
        l_nodes.append(n_argument)
fixes/fix_throw.py
"""Fixer for generator.throw(E, V, T).
g.throw(E)       -> g.throw(E)
g.throw(E, V)    -> g.throw(E(V))
g.throw(E, V, T) -> g.throw(E(V).with_traceback(T))

g.throw("foo"[, V[, T]]) will warn about string exceptions."""
# Author: Collin Winter

# Local imports
from .. import pytree
from ..pgen2 import token
from .. import fixer_base
from ..fixer_util import Name, Call, ArgList, Attr, is_tuple

class FixThrow(fixer_base.BaseFix):

    PATTERN = """
    power< any trailer< '.' 'throw' >
           trailer< '(' args=arglist< exc=any ',' val=any [',' tb=any] > ')' > >
    |
    power< any trailer< '.' 'throw' > trailer< '(' exc=any ')' > >
    """

    def transform(self, node, results):
        syms = self.syms

        exc = results["exc"].clone()
        if exc.type is token.STRING:
            self.cannot_convert(node, "Python 3 does not support string exceptions")
            return

        # Leave "g.throw(E)" alone
        val = results.get(u"val")
        if val is None:
            return

        val = val.clone()
        if is_tuple(val):
            args = [c.clone() for c in val.children[1:-1]]
        else:
            val.prefix = u""
            args = [val]

        throw_args = results["args"]

        if "tb" in results:
            tb = results["tb"].clone()
            tb.prefix = u""

            e = Call(exc, args)
            with_tb = Attr(e, Name(u'with_traceback')) + [ArgList([tb])]
            throw_args.replace(pytree.Node(syms.power, with_tb))
        else:
            throw_args.replace(Call(exc, args))
fixes/fix_filter.py
# Copyright 2007 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer that changes filter(F, X) into list(filter(F, X)).

We avoid the transformation if the filter() call is directly contained
in iter(<>), list(<>), tuple(<>), sorted(<>), ...join(<>), or
for V in <>:.

NOTE: This is still not correct if the original code was depending on
filter(F, X) to return a string if X is a string and a tuple if X is a
tuple.  That would require type inference, which we don't do.  Let
Python 2.6 figure it out.
"""

# Local imports
from ..pgen2 import token
from ..
import fixer_base
from ..fixer_util import Name, Call, ListComp, in_special_context

class FixFilter(fixer_base.ConditionalFix):

    PATTERN = """
    filter_lambda=power<
        'filter'
        trailer<
            '('
            arglist<
                lambdef< 'lambda'
                         (fp=NAME | vfpdef< '(' fp=NAME ')'> ) ':' xp=any
                >
                ','
                it=any
            >
            ')'
        >
    >
    |
    power<
        'filter'
        trailer< '(' arglist< none='None' ',' seq=any > ')' >
    >
    |
    power<
        'filter'
        args=trailer< '(' [any] ')' >
    >
    """

    skip_on = "future_builtins.filter"

    def transform(self, node, results):
        if self.should_skip(node):
            return

        if "filter_lambda" in results:
            new = ListComp(results.get("fp").clone(),
                           results.get("fp").clone(),
                           results.get("it").clone(),
                           results.get("xp").clone())

        elif "none" in results:
            new = ListComp(Name(u"_f"),
                           Name(u"_f"),
                           results["seq"].clone(),
                           Name(u"_f"))

        else:
            if in_special_context(node):
                return None
            new = node.clone()
            new.prefix = u""
            new = Call(Name(u"list"), [new])
        new.prefix = node.prefix
        return new
fixes/fix_metaclass.py
"""Fixer for __metaclass__ = X -> (metaclass=X) methods.

   The various forms of classdef (inherits nothing, inherits once, inherits
   many) don't parse the same in the CST so we look at ALL classes for
   a __metaclass__ and if we find one normalize the inherits to all be
   an arglist.

   For one-liner classes ('class X: pass') there is no indent/dedent so
   we normalize those into having a suite.

   Moving the __metaclass__ into the classdef can also cause the class
   body to be empty so there is some special casing for that as well.

   This fixer also tries very hard to keep original indenting and spacing
   in all those corner cases.
"""
# Author: Jack Diederich

# Local imports
from .. import fixer_base
from ..pygram import token
from ..fixer_util import Name, syms, Node, Leaf


def has_metaclass(parent):
    """ we have to check the cls_node without changing it.
        There are two possibilities:
           1)  clsdef => suite => simple_stmt => expr_stmt => Leaf('__meta')
           2)  clsdef => simple_stmt => expr_stmt => Leaf('__meta')
    """
    for node in parent.children:
        if node.type == syms.suite:
            return has_metaclass(node)
        elif node.type == syms.simple_stmt and node.children:
            expr_node = node.children[0]
            if expr_node.type == syms.expr_stmt and expr_node.children:
                left_side = expr_node.children[0]
                if isinstance(left_side, Leaf) and \
                        left_side.value == '__metaclass__':
                    return True
    return False


def fixup_parse_tree(cls_node):
    """ one-line classes don't get a suite in the parse tree so we add
        one to normalize the tree
    """
    for node in cls_node.children:
        if node.type == syms.suite:
            # already in the preferred format, do nothing
            return

    # !%@#! oneliners have no suite node, we have to fake one up
    for i, node in enumerate(cls_node.children):
        if node.type == token.COLON:
            break
    else:
        raise ValueError("No class suite and no ':'!")

    # move everything into a suite node
    suite = Node(syms.suite, [])
    while cls_node.children[i+1:]:
        move_node = cls_node.children[i+1]
        suite.append_child(move_node.clone())
        move_node.remove()
    cls_node.append_child(suite)
    node = suite


def fixup_simple_stmt(parent, i, stmt_node):
    """ if there is a semi-colon all the parts count as part of the same
        simple_stmt.  We just want the __metaclass__ part so we move
        everything after the semi-colon into its own simple_stmt node
    """
    for semi_ind, node in enumerate(stmt_node.children):
        if node.type == token.SEMI:  # *sigh*
            break
    else:
        return

    node.remove()  # kill the semicolon
    new_expr = Node(syms.expr_stmt, [])
    new_stmt = Node(syms.simple_stmt, [new_expr])
    while stmt_node.children[semi_ind:]:
        move_node = stmt_node.children[semi_ind]
        new_expr.append_child(move_node.clone())
        move_node.remove()
    parent.insert_child(i, new_stmt)
    new_leaf1 = new_stmt.children[0].children[0]
    old_leaf1 = stmt_node.children[0].children[0]
    new_leaf1.prefix = old_leaf1.prefix


def remove_trailing_newline(node):
    if node.children and node.children[-1].type == token.NEWLINE:
        node.children[-1].remove()


def find_metas(cls_node):
    # find the suite node (Mmm, sweet nodes)
    for node in cls_node.children:
        if node.type == syms.suite:
            break
    else:
        raise ValueError("No class suite!")

    # look for simple_stmt[ expr_stmt[ Leaf('__metaclass__') ] ]
    for i, simple_node in list(enumerate(node.children)):
        if simple_node.type == syms.simple_stmt and simple_node.children:
            expr_node = simple_node.children[0]
            if expr_node.type == syms.expr_stmt and expr_node.children:
                # Check if the expr_node is a simple assignment.
                left_node = expr_node.children[0]
                if isinstance(left_node, Leaf) and \
                        left_node.value == u'__metaclass__':
                    # We found an assignment to __metaclass__.
                    fixup_simple_stmt(node, i, simple_node)
                    remove_trailing_newline(simple_node)
                    yield (node, i, simple_node)


def fixup_indent(suite):
    """ If an INDENT is followed by a thing with a prefix then nuke the prefix
        Otherwise we get in trouble when removing __metaclass__ at suite start
    """
    kids = suite.children[::-1]
    # find the first indent
    while kids:
        node = kids.pop()
        if node.type == token.INDENT:
            break

    # find the first Leaf
    while kids:
        node = kids.pop()
        if isinstance(node, Leaf) and node.type != token.DEDENT:
            if node.prefix:
                node.prefix = u''
            return
        else:
            kids.extend(node.children[::-1])


class FixMetaclass(fixer_base.BaseFix):

    PATTERN = """
    classdef<any*>
    """

    def transform(self, node, results):
        if not has_metaclass(node):
            return

        fixup_parse_tree(node)

        # find metaclasses, keep the last one
        last_metaclass = None
        for suite, i, stmt in find_metas(node):
            last_metaclass = stmt
            stmt.remove()

        text_type = node.children[0].type  # always Leaf(nnn, 'class')

        # figure out what kind of classdef we have
        if len(node.children) == 7:
            # Node(classdef, ['class', 'name', '(', arglist, ')', ':', suite])
            #                 0        1       2    3        4    5    6
            if node.children[3].type == syms.arglist:
                arglist = node.children[3]
            # Node(classdef, ['class', 'name', '(', 'Parent', ')', ':', suite])
            else:
                parent = node.children[3].clone()
                arglist = Node(syms.arglist, [parent])
                node.set_child(3, arglist)
        elif len(node.children) == 6:
            # Node(classdef, ['class', 'name', '(',  ')', ':', suite])
            #                 0        1       2     3    4    5
            arglist = Node(syms.arglist, [])
            node.insert_child(3, arglist)
        elif len(node.children) == 4:
            # Node(classdef, ['class', 'name', ':', suite])
            #                 0        1       2    3
            arglist = Node(syms.arglist, [])
            node.insert_child(2, Leaf(token.RPAR, u')'))
            node.insert_child(2, arglist)
            node.insert_child(2, Leaf(token.LPAR, u'('))
        else:
            raise ValueError("Unexpected class definition")

        # now stick the metaclass in the arglist
        meta_txt = last_metaclass.children[0].children[0]
        meta_txt.value = 'metaclass'
        orig_meta_prefix = meta_txt.prefix

        if arglist.children:
            arglist.append_child(Leaf(token.COMMA, u','))
            meta_txt.prefix = u' '
        else:
            meta_txt.prefix = u''

        # compact the expression "metaclass = Meta" -> "metaclass=Meta"
        expr_stmt = last_metaclass.children[0]
        assert expr_stmt.type == syms.expr_stmt
        expr_stmt.children[1].prefix = u''
        expr_stmt.children[2].prefix = u''

        arglist.append_child(last_metaclass)

        fixup_indent(suite)

        # check for empty suite
        if not suite.children:
            # one-liner that was just __metaclass__
            suite.remove()
            pass_leaf = Leaf(text_type, u'pass')
            pass_leaf.prefix = orig_meta_prefix
            node.append_child(pass_leaf)
            node.append_child(Leaf(token.NEWLINE, u'\n'))

        elif len(suite.children) > 1 and \
                (suite.children[-2].type == token.INDENT and
                 suite.children[-1].type == token.DEDENT):
            # there was only one line in the class body and it was __metaclass__
            pass_leaf = Leaf(text_type, u'pass')
            suite.insert_child(-1, pass_leaf)
            suite.insert_child(-1, Leaf(token.NEWLINE, u'\n'))
fixes/fix_repr.py
# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer that transforms `xyzzy` into repr(xyzzy)."""

# Local imports
from .. import fixer_base
from ..fixer_util import Call, Name, parenthesize


class FixRepr(fixer_base.BaseFix):

    PATTERN = """
              atom < '`' expr=any '`' >
              """

    def transform(self, node, results):
        expr = results["expr"].clone()

        if expr.type == self.syms.testlist1:
            expr = parenthesize(expr)
        return Call(Name(u"repr"), [expr], prefix=node.prefix)
fixes/fix_itertools.py
""" Fixer for itertools.(imap|ifilter|izip) --> (map|filter|zip) and
    itertools.ifilterfalse --> itertools.filterfalse (bugs 2360-2363)

    imports from itertools are fixed in fix_itertools_import.py

    If itertools is imported as something else (ie: import itertools as it;
    it.izip(spam, eggs)) method calls will not get fixed.
"""

# Local imports
from .. import fixer_base
from ..fixer_util import Name

class FixItertools(fixer_base.BaseFix):
    it_funcs = "('imap'|'ifilter'|'izip'|'ifilterfalse')"
    PATTERN = """
              power< it='itertools'
                  trailer< dot='.' func=%(it_funcs)s > trailer< '(' [any] ')' > >
              |
              power< func=%(it_funcs)s trailer< '(' [any] ')' > >
              """ % (locals())

    # Needs to be run after fix_(map|zip|filter)
    run_order = 6

    def transform(self, node, results):
        prefix = None
        func = results['func'][0]
        if 'it' in results and func.value != u'ifilterfalse':
            dot, it = (results['dot'], results['it'])
            # Remove the 'itertools'
            prefix = it.prefix
            it.remove()
            # Replace the node which contains ('.', 'function') with the
            # function (to be consistent with the second part of the pattern)
            dot.remove()
            func.parent.replace(func)

        prefix = prefix or func.prefix
        func.replace(Name(func.value[1:], prefix=prefix))
[fixes/fix_dict.pyc: remainder is compiled bytecode, omitted]

fixes/fix_raw_input.pyc
[compiled bytecode omitted; recovered docstring:]
    Fixer that changes raw_input(...) into input(...).

fixes/fix_set_literal.py
"""
Optional fixer to transform set() calls to set literals.
"""

# Author: Benjamin Peterson

from lib2to3 import fixer_base, pytree
from lib2to3.fixer_util import token, syms


class FixSetLiteral(fixer_base.BaseFix):

    explicit = True

    PATTERN = """power< 'set' trailer< '('
                     (atom=atom< '[' (items=listmaker< any ((',' any)* [',']) >
                                |
                                single=any) ']' >
                     |
                     atom< '(' items=testlist_gexp< any ((',' any)* [',']) > ')' >
                     )
                     ')' > >
              """

    def transform(self, node, results):
        single = results.get("single")
        if single:
            # Make a fake listmaker
            fake = pytree.Node(syms.listmaker, [single.clone()])
            single.replace(fake)
            items = fake
        else:
            items = results["items"]

        # Build the contents of the literal
        literal = [pytree.Leaf(token.LBRACE, u"{")]
        literal.extend(n.clone() for n in items.children)
        literal.append(pytree.Leaf(token.RBRACE, u"}"))
        # Set the prefix of the right brace to that of the ')' or ']'
        literal[-1].prefix = items.next_sibling.prefix
        maker = pytree.Node(syms.dictsetmaker, literal)
        maker.prefix = node.prefix

        # If the original was a one tuple, we need to remove the extra comma.
        if len(maker.children) == 4:
            n = maker.children[2]
            n.remove()
            maker.children[-1].prefix = n.prefix

        # Finally, replace the set call with our shiny new literal.
        return maker

fixes/fix_map.py
# Copyright 2007 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer that changes map(F, ...) into list(map(F, ...)) unless there
exists a 'from future_builtins import map' statement in the top-level
namespace.

As a special case, map(None, X) is changed into list(X).  (This is
necessary because the semantics are changed in this case -- the new
map(None, X) is equivalent to [(x,) for x in X].)

We avoid the transformation (except for the special case mentioned
above) if the map() call is directly contained in iter(<>), list(<>),
tuple(<>), sorted(<>), ...join(<>), or for V in <>:.

NOTE: This is still not correct if the original code was depending on
map(F, X, Y, ...) to go on until the longest argument is exhausted,
substituting None for missing values -- like zip(), it now stops as
soon as the shortest argument is exhausted.
"""

# Local imports
from ..pgen2 import token
from .. import fixer_base
from ..fixer_util import Name, Call, ListComp, in_special_context
from ..pygram import python_symbols as syms

class FixMap(fixer_base.ConditionalFix):

    PATTERN = """
    map_none=power<
        'map'
        trailer< '(' arglist< 'None' ',' arg=any [','] > ')' >
    >
    |
    map_lambda=power<
        'map'
        trailer<
            '('
            arglist<
                lambdef< 'lambda'
                         (fp=NAME | vfpdef< '(' fp=NAME ')'> ) ':' xp=any
                >
                ','
                it=any
            >
            ')'
        >
    >
    |
    power<
        'map'
        trailer< '(' [arglist=any] ')' >
    >
    """

    skip_on = 'future_builtins.map'

    def transform(self, node, results):
        if self.should_skip(node):
            return

        if node.parent.type == syms.simple_stmt:
            self.warning(node, "You should use a for loop here")
            new = node.clone()
            new.prefix = u""
            new = Call(Name(u"list"), [new])
        elif "map_lambda" in results:
            new = ListComp(results["xp"].clone(),
                           results["fp"].clone(),
                           results["it"].clone())
        else:
            if "map_none" in results:
                new = results["arg"].clone()
            else:
                if "arglist" in results:
                    args = results["arglist"]
                    if args.type == syms.arglist and \
                       args.children[0].type == token.NAME and \
                       args.children[0].value == "None":
                        self.warning(node,
                                     "cannot convert map(None, ...) "
                                     "with multiple arguments because map() "
                                     "now truncates to the shortest sequence")
                        return

                if in_special_context(node):
                    return None
                new = node.clone()
                new.prefix = u""
                new = Call(Name(u"list"), [new])
        new.prefix = node.prefix
        return new

fixes/fix_exec.pyo
[compiled duplicate of fixes/fix_exec.pyc; bytecode omitted. Recovered docstring:]
    Fixer for exec.

    This converts usages of the exec statement into calls to a built-in
    exec() function.

        exec code in ns1, ns2 -> exec(code, ns1, ns2)

fixes/fix_urllib.py
"""Fix changes imports of urllib which are now incompatible.
   This is rather similar to fix_imports, but because of the more
   complex nature of the fixing for urllib, it has its own fixer.
"""
# Author: Nick Edds

# Local imports
from .fix_imports import alternates, FixImports
from .. import fixer_base
from ..fixer_util import Name, Comma, FromImport, Newline, attr_chain

MAPPING = {'urllib':  [
                ('urllib.request',
                    ['URLOpener', 'FancyURLOpener', 'urlretrieve',
                     '_urlopener', 'urlopen', 'urlcleanup',
                     'pathname2url', 'url2pathname']),
                ('urllib.parse',
                    ['quote', 'quote_plus', 'unquote', 'unquote_plus',
                     'urlencode', 'splitattr', 'splithost', 'splitnport',
                     'splitpasswd', 'splitport', 'splitquery', 'splittag',
                     'splittype', 'splituser', 'splitvalue', ]),
                ('urllib.error',
                    ['ContentTooShortError'])],
           'urllib2' : [
                ('urllib.request',
                    ['urlopen', 'install_opener', 'build_opener',
                     'Request', 'OpenerDirector', 'BaseHandler',
                     'HTTPDefaultErrorHandler', 'HTTPRedirectHandler',
                     'HTTPCookieProcessor', 'ProxyHandler',
                     'HTTPPasswordMgr', 'HTTPPasswordMgrWithDefaultRealm',
                     'AbstractBasicAuthHandler', 'HTTPBasicAuthHandler',
                     'ProxyBasicAuthHandler', 'AbstractDigestAuthHandler',
                     'HTTPDigestAuthHandler', 'ProxyDigestAuthHandler',
                     'HTTPHandler', 'HTTPSHandler', 'FileHandler',
                     'FTPHandler', 'CacheFTPHandler', 'UnknownHandler']),
                ('urllib.error',
                    ['URLError', 'HTTPError']),
           ]
}

# Duplicate the url parsing functions for urllib2.
MAPPING["urllib2"].append(MAPPING["urllib"][1])


def build_pattern():
    bare = set()
    for old_module, changes in MAPPING.items():
        for change in changes:
            new_module, members = change
            members = alternates(members)
            yield """import_name< 'import' (module=%r
                                  | dotted_as_names< any* module=%r any* >) >
                  """ % (old_module, old_module)
            yield """import_from< 'from' mod_member=%r 'import'
                       ( member=%s | import_as_name< member=%s 'as' any > |
                         import_as_names< members=any*  >) >
                  """ % (old_module, members, members)
            yield """import_from< 'from' module_star=%r 'import' star='*' >
                  """ % old_module
            yield """import_name< 'import'
                                  dotted_as_name< module_as=%r 'as' any > >
                  """ % old_module
            # bare_with_attr has a special significance for FixImports.match().
            yield """power< bare_with_attr=%r trailer< '.' member=%s > any* >
                  """ % (old_module, members)


class FixUrllib(FixImports):

    def build_pattern(self):
        return "|".join(build_pattern())

    def transform_import(self, node, results):
        """Transform for the basic import case. Replaces the old
           import name with a comma separated list of its
           replacements.
        """
        import_mod = results.get('module')
        pref = import_mod.prefix

        names = []

        # create a Node list of the replacement modules
        for name in MAPPING[import_mod.value][:-1]:
            names.extend([Name(name[0], prefix=pref), Comma()])
        names.append(Name(MAPPING[import_mod.value][-1][0], prefix=pref))
        import_mod.replace(names)

    def transform_member(self, node, results):
        """Transform for imports of specific module elements. Replaces
           the module to be imported from with the appropriate new
           module.
        """
        mod_member = results.get('mod_member')
        pref = mod_member.prefix
        member = results.get('member')

        # Simple case with only a single member being imported
        if member:
            # this may be a list of length one, or just a node
            if isinstance(member, list):
                member = member[0]
            new_name = None
            for change in MAPPING[mod_member.value]:
                if member.value in change[1]:
                    new_name = change[0]
                    break
            if new_name:
                mod_member.replace(Name(new_name, prefix=pref))
            else:
                self.cannot_convert(node, 'This is an invalid module element')

        # Multiple members being imported
        else:
            # a dictionary for replacements, order matters
            modules = []
            mod_dict = {}
            members = results.get('members')
            for member in members:
                member = member.value
                # we only care about the actual members
                if member != ',':
                    for change in MAPPING[mod_member.value]:
                        if member in change[1]:
                            if change[0] in mod_dict:
                                mod_dict[change[0]].append(member)
                            else:
                                mod_dict[change[0]] = [member]
                                modules.append(change[0])

            new_nodes = []
            for module in modules:
                elts = mod_dict[module]
                names = []
                for elt in elts[:-1]:
                    names.extend([Name(elt, prefix=pref), Comma()])
                names.append(Name(elts[-1], prefix=pref))
                new_nodes.append(FromImport(module, names))
            if new_nodes:
                nodes = []
                for new_node in new_nodes[:-1]:
                    nodes.extend([new_node, Newline()])
                nodes.append(new_nodes[-1])
                node.replace(nodes)
            else:
                self.cannot_convert(node, 'All module elements are invalid')

    def transform_dot(self, node, results):
        """Transform for calls to module members in code."""
        module_dot = results.get('bare_with_attr')
        member = results.get('member')
        new_name = None
        if isinstance(member, list):
            member = member[0]
        for change in MAPPING[module_dot.value]:
            if member.value in change[1]:
                new_name = change[0]
                break
        if new_name:
            module_dot.replace(Name(new_name, prefix=module_dot.prefix))
        else:
            self.cannot_convert(node, 'This is an invalid module element')

    def transform(self, node, results):
        if results.get('module'):
            self.transform_import(node, results)
        elif results.get('mod_member'):
            self.transform_member(node, results)
        elif results.get('bare_with_attr'):
            self.transform_dot(node, results)
        # Renaming and star imports are not supported for these modules.
        elif results.get('module_star'):
            self.cannot_convert(node, 'Cannot handle star imports.')
        elif results.get('module_as'):
            self.cannot_convert(node, 'This module is now multiple modules')

fixes/fix_map.pyo
[compiled duplicate of fixes/fix_map.py; bytecode omitted]
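The map() caveats described in the fix_map docstring can be observed directly on Python 3. This snippet is illustrative only and not part of the original archive:

```python
# map() now returns a lazy iterator, so the fixer wraps calls in list(...)
m = map(str, [1, 2, 3])
assert not isinstance(m, list)
assert list(m) == ['1', '2', '3']

# ...and multi-argument map() truncates at the shortest input, like zip(),
# instead of padding the shorter sequences with None as Python 2 did:
assert list(map(lambda a, b: a + b, [1, 2, 3], [10, 20])) == [11, 22]
```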
fixes/fix_itertools_imports.py
""" Fixer for imports of itertools.(imap|ifilter|izip|ifilterfalse) """

# Local imports
from lib2to3 import fixer_base
from lib2to3.fixer_util import BlankLine, syms, token


class FixItertoolsImports(fixer_base.BaseFix):
    PATTERN = """
              import_from< 'from' 'itertools' 'import' imports=any >
              """ % (locals())

    def transform(self, node, results):
        imports = results['imports']
        if imports.type == syms.import_as_name or not imports.children:
            children = [imports]
        else:
            children = imports.children
        for child in children[::2]:
            if child.type == token.NAME:
                member = child.value
                name_node = child
            else:
                assert child.type == syms.import_as_name
                name_node = child.children[0]
            member_name = name_node.value
            if member_name in (u'imap', u'izip', u'ifilter'):
                child.value = None
                child.remove()
            elif member_name == u'ifilterfalse':
                node.changed()
                name_node.value = u'filterfalse'

        # Make sure the import statement is still sane
        children = imports.children[:] or [imports]
        remove_comma = True
        for child in children:
            if remove_comma and child.type == token.COMMA:
                child.remove()
            else:
                remove_comma ^= True

        if children[-1].type == token.COMMA:
            children[-1].remove()

        # If there are no imports left, just get rid of the entire statement
        if not (imports.children or getattr(imports, 'value', None)) or \
           imports.parent is None:
            p = node.prefix
            node = BlankLine()
            node.prefix = p
            return node

fixes/fix_sys_exc.pyo
[compiled duplicate of fixes/fix_sys_exc.pyc; bytecode omitted]

fixes/fix_types.pyo
[compiled bytecode omitted; recovered docstring:]
    Fixer for removing uses of the types module.

    These work for only the known names in the types module.  The forms
    above can include types. or not.  ie, It is assumed the module is
    imported either as:

        import types
        from types import ...  # either * or specific types

    The import statements are not modified.

    There should be another fixer that handles at least the following
    constants:

        type([])  -> list
        type(())  -> tuple
        type('')  -> str

[recovered type mapping: BooleanType -> bool, BufferType -> memoryview,
 ClassType -> type, ComplexType -> complex, DictType/DictionaryType -> dict,
 EllipsisType -> type(Ellipsis), FloatType -> float, IntType -> int,
 ListType -> list, LongType -> int, ObjectType -> object,
 NoneType -> type(None), NotImplementedType -> type(NotImplemented),
 SliceType -> slice, StringType -> bytes, StringTypes -> str,
 TupleType -> tuple, TypeType -> type, UnicodeType -> str,
 XRangeType -> range]

fixes/fix_set_literal.pyo
[compiled duplicate of fixes/fix_set_literal.py; bytecode omitted]
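The set-literal rewrite (fix_set_literal) and the types-module constants (fix_types) can both be illustrated with plain Python 3. This snippet is illustrative only and not part of the original archive:

```python
# fix_set_literal: set([...]) and set((...)) become brace literals
assert {1, 2, 3} == set([1, 2, 3])
assert {1} == set((1,))   # the one-tuple case drops the extra comma

# fix_types: old types.* constants map onto builtins or type(...) calls
assert type(None).__name__ == 'NoneType'   # types.NoneType -> type(None)
assert isinstance(slice(1, 2), slice)      # types.SliceType -> slice
```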
fixes/fix_funcattrs.py
"""Fix function attribute names (f.func_x -> f.__x__)."""
# Author: Collin Winter

# Local imports
from .. import fixer_base
from ..fixer_util import Name


class FixFuncattrs(fixer_base.BaseFix):
    PATTERN = """
    power< any+ trailer< '.' attr=('func_closure' | 'func_doc' | 'func_globals'
                                  | 'func_name' | 'func_defaults' | 'func_code'
                                  | 'func_dict') > any* >
    """

    def transform(self, node, results):
        attr = results["attr"][0]
        attr.replace(Name((u"__%s__" % attr.value[5:]),
                          prefix=attr.prefix))

fixes/fix_paren.pyc
[compiled bytecode omitted; recovered docstring:]
    Fixer that adds parentheses where they are required.

    This converts ``[x for x in 1, 2]`` to ``[x for x in (1, 2)]``.

fixes/fix_operator.pyc
[compiled bytecode omitted; recovered docstring:]
    Fixer for operator.{isCallable,sequenceIncludes}

        operator.isCallable(obj)       -> hasattr(obj, '__call__')
        operator.sequenceIncludes(obj) -> operator.contains(obj)
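The function-attribute renames (fix_funcattrs) and the operator-module replacements (fix_operator) correspond to spellings that already work on Python 3. This snippet is illustrative only and not part of the original archive:

```python
import operator

def f(x=1):
    "docstring"
    return x

# fix_funcattrs: f.func_name -> f.__name__, f.func_doc -> f.__doc__, etc.
assert f.__name__ == 'f'
assert f.__doc__ == 'docstring'
assert f.__defaults__ == (1,)

# fix_operator: isCallable/sequenceIncludes are spelled differently now
assert hasattr(f, '__call__')        # operator.isCallable(f)
assert operator.contains([1, 2], 2)  # operator.sequenceIncludes([1, 2], 2)
```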
fixes/fix_throw.pyo
[compiled bytecode omitted; recovered docstring:]
    Fixer for generator.throw(E, V, T).

        g.throw(E)       -> g.throw(E)
        g.throw(E, V)    -> g.throw(E(V))
        g.throw(E, V, T) -> g.throw(E(V).with_traceback(T))

        g.throw("foo"[, V[, T]]) will warn about string exceptions.

fixes/fix_unicode.pyc
[compiled bytecode omitted; recovered docstring:]
    Fixer that changes unicode to str, unichr to chr, and u"..." into "...".

fixes/fix_exitfunc.pyo
[compiled bytecode omitted; recovered docstring:]
    Convert use of sys.exitfunc to use the atexit module.

fixes/fix_ws_comma.pyo
[compiled bytecode omitted; recovered docstring:]
    Fixer that changes 'a ,b' into 'a, b'.

    This also changes '{a :b}' into '{a: b}', but does not touch other
    uses of colons.  It does not touch other uses of whitespace.

fixes/__init__.pyc
[empty compiled module; bytecode omitted]

__init__.pyo
[empty compiled module; bytecode omitted]

fixer_base.pyo
[compiled bytecode omitted; recovered docstrings:]
    Base class for fixers (optional, but recommended).

    BaseFix: Optional base class for fixers.  The subclass name must be
    FixFooBar where FooBar is the result of removing underscores and
    capitalizing the words of the fix name.  For example, the class name
    for a fixer named 'has_key' should be FixHasKey.

    compile_pattern(): Compiles self.PATTERN into self.pattern.  Subclass
    may override if it doesn't want to use self.{pattern,PATTERN} in
    .match().

    set_filename(): Set the filename, and a logger derived from it.  The
    main refactoring tool should call this.

    match(): Returns match for a given parse tree node.  Should return a
    true or false object (not necessarily a bool).  It may return a
    non-empty dict of matching sub-nodes as returned by a matching
    pattern.  Subclass may override.

    transform(): Returns the transformation for a given parse tree node.
    Returns None, or a node that is a modified copy of the argument node.
    The node argument may also be modified in-place to effect the same
    change.  Subclass *must* override.

    new_name(template=u"xxx_todo_changeme"): Return a string suitable for
    use as an identifier.  The new name is guaranteed not to conflict
    with other identifiers.

    cannot_convert(): Warn the user that a given chunk of code is not
    valid Python 3, but that it cannot be converted automatically.
    First argument is the top-level node for the code in question.
    Optional second argument is why it can't be converted.
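BaseFix.new_name appends successive numbers to a template until the identifier is unused. A minimal re-implementation of that loop, for illustration only (not the original method):

```python
import itertools

def new_name(used_names, template="xxx_todo_changeme"):
    # Mirrors BaseFix.new_name: try template, then template1, template2, ...
    numbers = itertools.count(1)
    name = template
    while name in used_names:
        name = template + str(next(numbers))
    used_names.add(name)
    return name

used = {"xxx_todo_changeme", "xxx_todo_changeme1"}
assert new_name(used) == "xxx_todo_changeme2"
assert new_name(used) == "xxx_todo_changeme3"
```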
s Line %d: %sN(R$R#(RRR'R(((s*/usr/lib64/python2.6/lib2to3/fixer_base.pytwarnings cCs8|i|_|i|tid|_t|_dS(sSome fixers need to maintain tree-wide state. This method is called once, at the start of tree fix-up. tree - the root node of the tree to be processed. filename - the name of the file the tree came from. iN(RRt itertoolstcountRtTrueR(RttreeR ((s*/usr/lib64/python2.6/lib2to3/fixer_base.pyt start_trees  cCsdS(sSome fixers need to maintain tree-wide state. This method is called once, at the conclusion of tree fix-up. tree - the root node of the tree to be processed. filename - the name of the file the tree came from. N((RR0R ((s*/usr/lib64/python2.6/lib2to3/fixer_base.pyt finish_treesN(!t__name__t __module__t__doc__R R R RR RR-R.RtsetRtorderR texplicitt run_ordert _accept_typeRtpython_symbolstsymsR RRRRRR#R+R,R1R2(((s*/usr/lib64/python2.6/lib2to3/fixer_base.pyRs0       tConditionalFixcBs&eZdZdZdZdZRS(s@ Base class for fixers which not execute if an import is found. cGs#tt|i|d|_dS(N(tsuperR=R1R t _should_skip(Rtargs((s*/usr/lib64/python2.6/lib2to3/fixer_base.pyR1scCsc|idj o|iS|iid}|d}di|d }t||||_|iS(Nt.i(R?R tskip_ontsplittjoinR(RRtpkgR((s*/usr/lib64/python2.6/lib2to3/fixer_base.pyt should_skips N(R3R4R5R RBR1RF(((s*/usr/lib64/python2.6/lib2to3/fixer_base.pyR=s ( R5RR-tpatcompRtRt fixer_utilRtobjectRR=(((s*/usr/lib64/python2.6/lib2to3/fixer_base.pyts  patcomp.py000066600000015467150501042300006570 0ustar00# Copyright 2006 Google, Inc. All Rights Reserved. # Licensed to PSF under a Contributor Agreement. """Pattern compiler. The grammer is taken from PatternGrammar.txt. The compiler compiles a pattern to a pytree.*Pattern instance. """ __author__ = "Guido van Rossum " # Python imports import os # Fairly local imports from .pgen2 import driver, literals, token, tokenize, parse, grammar # Really local imports from . import pytree from . 
import pygram

# The pattern grammar file
_PATTERN_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__),
                                     "PatternGrammar.txt")


class PatternSyntaxError(Exception):
    pass


def tokenize_wrapper(input):
    """Tokenizes a string suppressing significant whitespace."""
    skip = set((token.NEWLINE, token.INDENT, token.DEDENT))
    tokens = tokenize.generate_tokens(driver.generate_lines(input).next)
    for quintuple in tokens:
        type, value, start, end, line_text = quintuple
        if type not in skip:
            yield quintuple


class PatternCompiler(object):

    def __init__(self, grammar_file=_PATTERN_GRAMMAR_FILE):
        """Initializer.

        Takes an optional alternative filename for the pattern grammar.
        """
        self.grammar = driver.load_grammar(grammar_file)
        self.syms = pygram.Symbols(self.grammar)
        self.pygrammar = pygram.python_grammar
        self.pysyms = pygram.python_symbols
        self.driver = driver.Driver(self.grammar, convert=pattern_convert)

    def compile_pattern(self, input, debug=False):
        """Compiles a pattern string to a nested pytree.*Pattern object."""
        tokens = tokenize_wrapper(input)
        try:
            root = self.driver.parse_tokens(tokens, debug=debug)
        except parse.ParseError, e:
            raise PatternSyntaxError(str(e))
        return self.compile_node(root)

    def compile_node(self, node):
        """Compiles a node, recursively.

        This is one big switch on the node type.
        """
        # XXX Optimize certain Wildcard-containing-Wildcard patterns
        # that can be merged
        if node.type == self.syms.Matcher:
            node = node.children[0]  # Avoid unneeded recursion

        if node.type == self.syms.Alternatives:
            # Skip the odd children since they are just '|' tokens
            alts = [self.compile_node(ch) for ch in node.children[::2]]
            if len(alts) == 1:
                return alts[0]
            p = pytree.WildcardPattern([[a] for a in alts], min=1, max=1)
            return p.optimize()

        if node.type == self.syms.Alternative:
            units = [self.compile_node(ch) for ch in node.children]
            if len(units) == 1:
                return units[0]
            p = pytree.WildcardPattern([units], min=1, max=1)
            return p.optimize()

        if node.type == self.syms.NegatedUnit:
            pattern = self.compile_basic(node.children[1:])
            p = pytree.NegatedPattern(pattern)
            return p.optimize()

        assert node.type == self.syms.Unit

        name = None
        nodes = node.children
        if len(nodes) >= 3 and nodes[1].type == token.EQUAL:
            name = nodes[0].value
            nodes = nodes[2:]
        repeat = None
        if len(nodes) >= 2 and nodes[-1].type == self.syms.Repeater:
            repeat = nodes[-1]
            nodes = nodes[:-1]

        # Now we've reduced it to: STRING | NAME [Details] | (...) | [...]
        pattern = self.compile_basic(nodes, repeat)

        if repeat is not None:
            assert repeat.type == self.syms.Repeater
            children = repeat.children
            child = children[0]
            if child.type == token.STAR:
                min = 0
                max = pytree.HUGE
            elif child.type == token.PLUS:
                min = 1
                max = pytree.HUGE
            elif child.type == token.LBRACE:
                assert children[-1].type == token.RBRACE
                assert len(children) in (3, 5)
                min = max = self.get_int(children[1])
                if len(children) == 5:
                    max = self.get_int(children[3])
            else:
                assert False
            if min != 1 or max != 1:
                pattern = pattern.optimize()
                pattern = pytree.WildcardPattern([[pattern]], min=min, max=max)

        if name is not None:
            pattern.name = name
        return pattern.optimize()

    def compile_basic(self, nodes, repeat=None):
        # Compile STRING | NAME [Details] | (...) | [...]
        assert len(nodes) >= 1
        node = nodes[0]
        if node.type == token.STRING:
            value = unicode(literals.evalString(node.value))
            return pytree.LeafPattern(_type_of_literal(value), value)
        elif node.type == token.NAME:
            value = node.value
            if value.isupper():
                if value not in TOKEN_MAP:
                    raise PatternSyntaxError("Invalid token: %r" % value)
                if nodes[1:]:
                    raise PatternSyntaxError("Can't have details for token")
                return pytree.LeafPattern(TOKEN_MAP[value])
            else:
                if value == "any":
                    type = None
                elif not value.startswith("_"):
                    type = getattr(self.pysyms, value, None)
                    if type is None:
                        raise PatternSyntaxError("Invalid symbol: %r" % value)
                if nodes[1:]:  # Details present
                    content = [self.compile_node(nodes[1].children[1])]
                else:
                    content = None
                return pytree.NodePattern(type, content)
        elif node.value == "(":
            return self.compile_node(nodes[1])
        elif node.value == "[":
            assert repeat is None
            subpattern = self.compile_node(nodes[1])
            return pytree.WildcardPattern([[subpattern]], min=0, max=1)
        assert False, node

    def get_int(self, node):
        assert node.type == token.NUMBER
        return int(node.value)


# Map named tokens to the type value for a LeafPattern
TOKEN_MAP = {"NAME": token.NAME,
             "STRING": token.STRING,
             "NUMBER": token.NUMBER,
             "TOKEN": None}


def _type_of_literal(value):
    if value[0].isalpha():
        return token.NAME
    elif value in grammar.opmap:
        return grammar.opmap[value]
    else:
        return None


def pattern_convert(grammar, raw_node_info):
    """Converts raw node information to a Node or Leaf instance."""
    type, value, context, children = raw_node_info
    if children or type in grammar.number2symbol:
        return pytree.Node(type, children, context=context)
    else:
        return pytree.Leaf(type, value, context=context)


def compile_pattern(pattern):
    return PatternCompiler().compile_pattern(pattern)
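The `tokenize_wrapper` helper above feeds a pattern string through the ordinary Python tokenizer while discarding the whitespace-significant tokens (NEWLINE, INDENT, DEDENT) that the pattern grammar ignores. A minimal Python 3 sketch of the same idea, self-contained and independent of lib2to3 internals (the names and the sample pattern are illustrative only):

```python
# Sketch of the tokenize_wrapper idea in Python 3: tokenize a pattern
# string and drop the tokens that only encode line/indent structure.
import io
import tokenize

SKIP = {tokenize.NEWLINE, tokenize.NL, tokenize.INDENT, tokenize.DEDENT}

def tokenize_wrapper(source):
    readline = io.StringIO(source).readline
    for tok in tokenize.generate_tokens(readline):
        if tok.type not in SKIP:
            yield tok

# A lib2to3-style pattern is tokenized with the ordinary Python tokenizer:
toks = [t.string for t in tokenize_wrapper("power< 'print' any* >")
        if t.type != tokenize.ENDMARKER]
print(toks)  # ['power', '<', "'print'", 'any', '*', '>']
```

Because the pattern grammar is whitespace-insensitive, a multi-line pattern tokenizes to the same stream as its single-line form.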
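The `pytree.py` source reproduced below models a program as a tree of Nodes and Leaves in which every leaf also carries the whitespace and comments ("prefix") that precede it, so that stringifying the tree reproduces the input exactly. A tiny self-contained sketch of that model (deliberately simplified; these are not the real classes):

```python
# Minimal sketch of the pytree model: a Leaf stores a token value plus
# the whitespace before it, a Node stores children, and str(tree)
# reproduces the original source text exactly.
class Leaf:
    def __init__(self, value, prefix=""):
        self.value = value
        self.prefix = prefix

    def __str__(self):
        return self.prefix + self.value

class Node:
    def __init__(self, children):
        self.children = list(children)
        for ch in self.children:
            ch.parent = self  # parent back-pointers, as in pytree

    def __str__(self):
        return "".join(map(str, self.children))

tree = Node([Leaf("x"), Leaf("=", " "), Leaf("1", " ")])
print(str(tree))  # x = 1
```

Keeping the prefix on each leaf is what lets 2to3 rewrite code without disturbing the surrounding formatting.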
# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""
Python parse tree definitions.
""" __author__ = "Guido van Rossum " import sys import warnings from StringIO import StringIO HUGE = 0x7FFFFFFF # maximum repeat count, default max _type_reprs = {} def type_repr(type_num): global _type_reprs if not _type_reprs: from .pygram import python_symbols # printing tokens is possible but not as useful # from .pgen2 import token // token.__dict__.items(): for name, val in python_symbols.__dict__.items(): if type(val) == int: _type_reprs[val] = name return _type_reprs.setdefault(type_num, type_num) class Base(object): """ Abstract base class for Node and Leaf. This provides some default functionality and boilerplate using the template pattern. A node may be a subnode of at most one parent. """ # Default values for instance variables type = None # int: token number (< 256) or symbol number (>= 256) parent = None # Parent node pointer, or None children = () # Tuple of subnodes was_changed = False def __new__(cls, *args, **kwds): """Constructor that prevents Base from being instantiated.""" assert cls is not Base, "Cannot instantiate Base" return object.__new__(cls) def __eq__(self, other): """ Compare two nodes for equality. This calls the method _eq(). """ if self.__class__ is not other.__class__: return NotImplemented return self._eq(other) __hash__ = None # For Py3 compatibility. def __ne__(self, other): """ Compare two nodes for inequality. This calls the method _eq(). """ if self.__class__ is not other.__class__: return NotImplemented return not self._eq(other) def _eq(self, other): """ Compare two nodes for equality. This is called by __eq__ and __ne__. It is only called if the two nodes have the same type. This must be implemented by the concrete subclass. Nodes should be considered equal if they have the same structure, ignoring the prefix string and other context information. """ raise NotImplementedError def clone(self): """ Return a cloned (deep) copy of self. This must be implemented by the concrete subclass. 
""" raise NotImplementedError def post_order(self): """ Return a post-order iterator for the tree. This must be implemented by the concrete subclass. """ raise NotImplementedError def pre_order(self): """ Return a pre-order iterator for the tree. This must be implemented by the concrete subclass. """ raise NotImplementedError def set_prefix(self, prefix): """ Set the prefix for the node (see Leaf class). DEPRECATED; use the prefix property directly. """ warnings.warn("set_prefix() is deprecated; use the prefix property", DeprecationWarning, stacklevel=2) self.prefix = prefix def get_prefix(self): """ Return the prefix for the node (see Leaf class). DEPRECATED; use the prefix property directly. """ warnings.warn("get_prefix() is deprecated; use the prefix property", DeprecationWarning, stacklevel=2) return self.prefix def replace(self, new): """Replace this node with a new one in the parent.""" assert self.parent is not None, str(self) assert new is not None if not isinstance(new, list): new = [new] l_children = [] found = False for ch in self.parent.children: if ch is self: assert not found, (self.parent.children, self, new) if new is not None: l_children.extend(new) found = True else: l_children.append(ch) assert found, (self.children, self, new) self.parent.changed() self.parent.children = l_children for x in new: x.parent = self.parent self.parent = None def get_lineno(self): """Return the line number which generated the invocant node.""" node = self while not isinstance(node, Leaf): if not node.children: return node = node.children[0] return node.lineno def changed(self): if self.parent: self.parent.changed() self.was_changed = True def remove(self): """ Remove the node from the tree. Returns the position of the node in its parent's children before it was removed. 
""" if self.parent: for i, node in enumerate(self.parent.children): if node is self: self.parent.changed() del self.parent.children[i] self.parent = None return i @property def next_sibling(self): """ The node immediately following the invocant in their parent's children list. If the invocant does not have a next sibling, it is None """ if self.parent is None: return None # Can't use index(); we need to test by identity for i, child in enumerate(self.parent.children): if child is self: try: return self.parent.children[i+1] except IndexError: return None @property def prev_sibling(self): """ The node immediately preceding the invocant in their parent's children list. If the invocant does not have a previous sibling, it is None. """ if self.parent is None: return None # Can't use index(); we need to test by identity for i, child in enumerate(self.parent.children): if child is self: if i == 0: return None return self.parent.children[i-1] def get_suffix(self): """ Return the string immediately following the invocant node. This is effectively equivalent to node.next_sibling.prefix """ next_sib = self.next_sibling if next_sib is None: return u"" return next_sib.prefix if sys.version_info < (3, 0): def __str__(self): return unicode(self).encode("ascii") class Node(Base): """Concrete implementation for interior nodes.""" def __init__(self, type, children, context=None, prefix=None): """ Initializer. Takes a type constant (a symbol number >= 256), a sequence of child nodes, and an optional context keyword argument. As a side effect, the parent pointers of the children are updated. 
""" assert type >= 256, type self.type = type self.children = list(children) for ch in self.children: assert ch.parent is None, repr(ch) ch.parent = self if prefix is not None: self.prefix = prefix def __repr__(self): """Return a canonical string representation.""" return "%s(%s, %r)" % (self.__class__.__name__, type_repr(self.type), self.children) def __unicode__(self): """ Return a pretty string representation. This reproduces the input source exactly. """ return u"".join(map(unicode, self.children)) if sys.version_info > (3, 0): __str__ = __unicode__ def _eq(self, other): """Compare two nodes for equality.""" return (self.type, self.children) == (other.type, other.children) def clone(self): """Return a cloned (deep) copy of self.""" return Node(self.type, [ch.clone() for ch in self.children]) def post_order(self): """Return a post-order iterator for the tree.""" for child in self.children: for node in child.post_order(): yield node yield self def pre_order(self): """Return a pre-order iterator for the tree.""" yield self for child in self.children: for node in child.post_order(): yield node def _prefix_getter(self): """ The whitespace and comments preceding this node in the input. """ if not self.children: return "" return self.children[0].prefix def _prefix_setter(self, prefix): if self.children: self.children[0].prefix = prefix prefix = property(_prefix_getter, _prefix_setter) def set_child(self, i, child): """ Equivalent to 'node.children[i] = child'. This method also sets the child's parent attribute appropriately. """ child.parent = self self.children[i].parent = None self.children[i] = child self.changed() def insert_child(self, i, child): """ Equivalent to 'node.children.insert(i, child)'. This method also sets the child's parent attribute appropriately. """ child.parent = self self.children.insert(i, child) self.changed() def append_child(self, child): """ Equivalent to 'node.children.append(child)'. 
This method also sets the child's parent attribute appropriately. """ child.parent = self self.children.append(child) self.changed() class Leaf(Base): """Concrete implementation for leaf nodes.""" # Default values for instance variables _prefix = "" # Whitespace and comments preceding this token in the input lineno = 0 # Line where this token starts in the input column = 0 # Column where this token tarts in the input def __init__(self, type, value, context=None, prefix=None): """ Initializer. Takes a type constant (a token number < 256), a string value, and an optional context keyword argument. """ assert 0 <= type < 256, type if context is not None: self._prefix, (self.lineno, self.column) = context self.type = type self.value = value if prefix is not None: self._prefix = prefix def __repr__(self): """Return a canonical string representation.""" return "%s(%r, %r)" % (self.__class__.__name__, self.type, self.value) def __unicode__(self): """ Return a pretty string representation. This reproduces the input source exactly. """ return self.prefix + unicode(self.value) if sys.version_info > (3, 0): __str__ = __unicode__ def _eq(self, other): """Compare two nodes for equality.""" return (self.type, self.value) == (other.type, other.value) def clone(self): """Return a cloned (deep) copy of self.""" return Leaf(self.type, self.value, (self.prefix, (self.lineno, self.column))) def post_order(self): """Return a post-order iterator for the tree.""" yield self def pre_order(self): """Return a pre-order iterator for the tree.""" yield self def _prefix_getter(self): """ The whitespace and comments preceding this token in the input. """ return self._prefix def _prefix_setter(self, prefix): self.changed() self._prefix = prefix prefix = property(_prefix_getter, _prefix_setter) def convert(gr, raw_node): """ Convert raw node information to a Node or Leaf instance. 
This is passed to the parser driver which calls it whenever a reduction of a grammar rule produces a new complete node, so that the tree is build strictly bottom-up. """ type, value, context, children = raw_node if children or type in gr.number2symbol: # If there's exactly one child, return that child instead of # creating a new node. if len(children) == 1: return children[0] return Node(type, children, context=context) else: return Leaf(type, value, context=context) class BasePattern(object): """ A pattern is a tree matching pattern. It looks for a specific node type (token or symbol), and optionally for a specific content. This is an abstract base class. There are three concrete subclasses: - LeafPattern matches a single leaf node; - NodePattern matches a single node (usually non-leaf); - WildcardPattern matches a sequence of nodes of variable length. """ # Defaults for instance variables type = None # Node type (token if < 256, symbol if >= 256) content = None # Optional content matching pattern name = None # Optional name used to store match in results dict def __new__(cls, *args, **kwds): """Constructor that prevents BasePattern from being instantiated.""" assert cls is not BasePattern, "Cannot instantiate BasePattern" return object.__new__(cls) def __repr__(self): args = [type_repr(self.type), self.content, self.name] while args and args[-1] is None: del args[-1] return "%s(%s)" % (self.__class__.__name__, ", ".join(map(repr, args))) def optimize(self): """ A subclass can define this as a hook for optimizations. Returns either self or another node with the same effect. """ return self def match(self, node, results=None): """ Does this pattern exactly match a node? Returns True if it matches, False if not. If results is not None, it must be a dict which will be updated with the nodes matching named subpatterns. Default implementation for non-wildcard patterns. 
""" if self.type is not None and node.type != self.type: return False if self.content is not None: r = None if results is not None: r = {} if not self._submatch(node, r): return False if r: results.update(r) if results is not None and self.name: results[self.name] = node return True def match_seq(self, nodes, results=None): """ Does this pattern exactly match a sequence of nodes? Default implementation for non-wildcard patterns. """ if len(nodes) != 1: return False return self.match(nodes[0], results) def generate_matches(self, nodes): """ Generator yielding all matches for this pattern. Default implementation for non-wildcard patterns. """ r = {} if nodes and self.match(nodes[0], r): yield 1, r class LeafPattern(BasePattern): def __init__(self, type=None, content=None, name=None): """ Initializer. Takes optional type, content, and name. The type, if given must be a token type (< 256). If not given, this matches any *leaf* node; the content may still be required. The content, if given, must be a string. If a name is given, the matching node is stored in the results dict under that key. """ if type is not None: assert 0 <= type < 256, type if content is not None: assert isinstance(content, basestring), repr(content) self.type = type self.content = content self.name = name def match(self, node, results=None): """Override match() to insist on a leaf node.""" if not isinstance(node, Leaf): return False return BasePattern.match(self, node, results) def _submatch(self, node, results=None): """ Match the pattern's content to the node's children. This assumes the node type matches and self.content is not None. Returns True if it matches, False if not. If results is not None, it must be a dict which will be updated with the nodes matching named subpatterns. When returning False, the results dict may still be updated. 
        """
        return self.content == node.value


class NodePattern(BasePattern):

    wildcards = False

    def __init__(self, type=None, content=None, name=None):
        """
        Initializer.  Takes optional type, content, and name.

        The type, if given, must be a symbol type (>= 256).  If the
        type is None this matches *any* single node (leaf or not),
        except if content is not None, in which case it only matches
        non-leaf nodes that also match the content pattern.

        The content, if not None, must be a sequence of Patterns that
        must match the node's children exactly.  If the content is
        given, the type must not be None.

        If a name is given, the matching node is stored in the results
        dict under that key.
        """
        if type is not None:
            assert type >= 256, type
        if content is not None:
            assert not isinstance(content, basestring), repr(content)
            content = list(content)
            for i, item in enumerate(content):
                assert isinstance(item, BasePattern), (i, item)
                if isinstance(item, WildcardPattern):
                    self.wildcards = True
        self.type = type
        self.content = content
        self.name = name

    def _submatch(self, node, results=None):
        """
        Match the pattern's content to the node's children.

        This assumes the node type matches and self.content is not None.

        Returns True if it matches, False if not.

        If results is not None, it must be a dict which will be
        updated with the nodes matching named subpatterns.

        When returning False, the results dict may still be updated.
        """
        if self.wildcards:
            for c, r in generate_matches(self.content, node.children):
                if c == len(node.children):
                    if results is not None:
                        results.update(r)
                    return True
            return False
        if len(self.content) != len(node.children):
            return False
        for subpattern, child in zip(self.content, node.children):
            if not subpattern.match(child, results):
                return False
        return True


class WildcardPattern(BasePattern):

    """
    A wildcard pattern can match zero or more nodes.

    This has all the flexibility needed to implement patterns like:

    .*      .+      .?      .{m,n}
    (a b c | d e | f)
    (...)*  (...)+  (...)?
    (...){m,n}

    except it always uses non-greedy matching.
    """

    def __init__(self, content=None, min=0, max=HUGE, name=None):
        """
        Initializer.

        Args:
            content: optional sequence of subsequences of patterns;
                     if absent, matches one node;
                     if present, each subsequence is an alternative [*]
            min: optional minimum number of times to match, default 0
            max: optional maximum number of times to match, default HUGE
            name: optional name assigned to this match

        [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is
            equivalent to (a b c | d e | f g h); if content is None,
            this is equivalent to '.' in regular expression terms.
            The min and max parameters work as follows:
                min=0, max=maxint: .*
                min=1, max=maxint: .+
                min=0, max=1: .?
                min=1, max=1: .
            If content is not None, replace the dot with the parenthesized
            list of alternatives, e.g. (a b c | d e | f g h)*
        """
        assert 0 <= min <= max <= HUGE, (min, max)
        if content is not None:
            content = tuple(map(tuple, content))  # Protect against alterations
            # Check sanity of alternatives
            assert len(content), repr(content)  # Can't have zero alternatives
            for alt in content:
                assert len(alt), repr(alt)  # Can have empty alternatives
        self.content = content
        self.min = min
        self.max = max
        self.name = name

    def optimize(self):
        """Optimize certain stacked wildcard patterns."""
        subpattern = None
        if (self.content is not None and
                len(self.content) == 1 and len(self.content[0]) == 1):
            subpattern = self.content[0][0]
        if self.min == 1 and self.max == 1:
            if self.content is None:
                return NodePattern(name=self.name)
            if subpattern is not None and self.name == subpattern.name:
                return subpattern.optimize()
        if (self.min <= 1 and isinstance(subpattern, WildcardPattern) and
                subpattern.min <= 1 and self.name == subpattern.name):
            return WildcardPattern(subpattern.content,
                                   self.min*subpattern.min,
                                   self.max*subpattern.max,
                                   subpattern.name)
        return self

    def match(self, node, results=None):
        """Does this pattern exactly match a node?"""
        return self.match_seq([node], results)

    def match_seq(self, nodes, results=None):
        """Does this pattern exactly match a sequence of nodes?"""
        for c, r in self.generate_matches(nodes):
            if c == len(nodes):
                if results is not None:
                    results.update(r)
                    if self.name:
                        results[self.name] = list(nodes)
                return True
        return False

    def generate_matches(self, nodes):
        """
        Generator yielding matches for a sequence of nodes.

        Args:
            nodes: sequence of nodes

        Yields:
            (count, results) tuples where:
            count: the match comprises nodes[:count];
            results: dict containing named submatches.
        """
        if self.content is None:
            # Shortcut for special case (see __init__.__doc__)
            for count in xrange(self.min, 1 + min(len(nodes), self.max)):
                r = {}
                if self.name:
                    r[self.name] = nodes[:count]
                yield count, r
        elif self.name == "bare_name":
            yield self._bare_name_matches(nodes)
        else:
            # The reason for this is that hitting the recursion limit usually
            # results in some ugly messages about how RuntimeErrors are being
            # ignored.
            save_stderr = sys.stderr
            sys.stderr = StringIO()
            try:
                for count, r in self._recursive_matches(nodes, 0):
                    if self.name:
                        r[self.name] = nodes[:count]
                    yield count, r
            except RuntimeError:
                # We fall back to the iterative pattern matching scheme if the
                # recursive scheme hits the recursion limit.
                for count, r in self._iterative_matches(nodes):
                    if self.name:
                        r[self.name] = nodes[:count]
                    yield count, r
            finally:
                sys.stderr = save_stderr

    def _iterative_matches(self, nodes):
        """Helper to iteratively yield the matches."""
        nodelen = len(nodes)
        if 0 >= self.min:
            yield 0, {}

        results = []
        # generate matches that use just one alt from self.content
        for alt in self.content:
            for c, r in generate_matches(alt, nodes):
                yield c, r
                results.append((c, r))

        # for each match, iterate down the nodes
        while results:
            new_results = []
            for c0, r0 in results:
                # stop if the entire set of nodes has been matched
                if c0 < nodelen and c0 <= self.max:
                    for alt in self.content:
                        for c1, r1 in generate_matches(alt, nodes[c0:]):
                            if c1 > 0:
                                r = {}
                                r.update(r0)
                                r.update(r1)
                                yield c0 + c1, r
                                new_results.append((c0 + c1, r))
            results = new_results

    def _bare_name_matches(self, nodes):
        """Special optimized matcher for bare_name."""
        count = 0
        r = {}
        done = False
        max = len(nodes)
        while not done and count < max:
            done = True
            for leaf in self.content:
                if leaf[0].match(nodes[count], r):
                    count += 1
                    done = False
                    break
        r[self.name] = nodes[:count]
        return count, r

    def _recursive_matches(self, nodes, count):
        """Helper to recursively yield the matches."""
        assert self.content is not None
        if count >= self.min:
            yield 0, {}
        if count < self.max:
            for alt in self.content:
                for c0, r0 in generate_matches(alt, nodes):
                    for c1, r1 in self._recursive_matches(nodes[c0:], count+1):
                        r = {}
                        r.update(r0)
                        r.update(r1)
                        yield c0 + c1, r


class NegatedPattern(BasePattern):

    def __init__(self, content=None):
        """
        Initializer.

        The argument is either a pattern or None.  If it is None, this
        only matches an empty sequence (effectively '$' in regex
        lingo).  If it is not None, this matches whenever the argument
        pattern doesn't have any matches.
        """
        if content is not None:
            assert isinstance(content, BasePattern), repr(content)
        self.content = content

    def match(self, node):
        # We never match a node in its entirety
        return False

    def match_seq(self, nodes):
        # We only match an empty sequence of nodes in its entirety
        return len(nodes) == 0

    def generate_matches(self, nodes):
        if self.content is None:
            # Return a match if there is an empty sequence
            if len(nodes) == 0:
                yield 0, {}
        else:
            # Return a match if the argument pattern has no matches
            for c, r in self.content.generate_matches(nodes):
                return
            yield 0, {}


def generate_matches(patterns, nodes):
    """
    Generator yielding matches for a sequence of patterns and nodes.

    Args:
        patterns: a sequence of patterns
        nodes: a sequence of nodes

    Yields:
        (count, results) tuples where:
        count: the entire sequence of patterns matches nodes[:count];
        results: dict containing named submatches.
    """
    if not patterns:
        yield 0, {}
    else:
        p, rest = patterns[0], patterns[1:]
        for c0, r0 in p.generate_matches(nodes):
            if not rest:
                yield c0, r0
            else:
                for c1, r1 in generate_matches(rest, nodes[c0:]):
                    r = {}
                    r.update(r0)
                    r.update(r1)
                    yield c0 + c1, r
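The module-level `generate_matches()` above chains per-pattern generators: each pattern yields every prefix length it can consume, and the remainder of the pattern sequence is matched against the remaining nodes. Because wildcard-style patterns yield shorter counts first, the overall search is non-greedy. The following is a minimal self-contained sketch of that composition idea; `Any` and `Lit` are made-up stand-ins for illustration, not lib2to3 classes.

```python
class Any:
    """Stand-in for WildcardPattern(content=None): matches between
    min and max nodes, yielding shorter matches first (non-greedy)."""
    def __init__(self, min=0, max=2, name=None):
        self.min, self.max, self.name = min, max, name

    def generate_matches(self, nodes):
        for count in range(self.min, 1 + min(len(nodes), self.max)):
            r = {self.name: nodes[:count]} if self.name else {}
            yield count, r


class Lit:
    """Stand-in for LeafPattern(content=...): matches exactly one node
    with the given value."""
    def __init__(self, value):
        self.value = value

    def generate_matches(self, nodes):
        if nodes and nodes[0] == self.value:
            yield 1, {}


def generate_matches(patterns, nodes):
    """Same shape as the lib2to3 function above: the first pattern
    proposes a prefix length, the rest match the remaining nodes."""
    if not patterns:
        yield 0, {}
        return
    p, rest = patterns[0], patterns[1:]
    for c0, r0 in p.generate_matches(nodes):
        if not rest:
            yield c0, r0
        else:
            for c1, r1 in generate_matches(rest, nodes[c0:]):
                r = dict(r0)
                r.update(r1)
                yield c0 + c1, r


# Pattern ".{0,2} 'b'" against ["a", "b", "b"]: the wildcard tries the
# empty prefix first (which fails, since "a" != "b"), then longer ones.
matches = list(generate_matches([Any(0, 2, "pre"), Lit("b")],
                                ["a", "b", "b"]))
# matches == [(2, {"pre": ["a"]}), (3, {"pre": ["a", "b"]})]
```

Note how the shortest viable wildcard match is reported first; this ordering is what the docstring means by "it always uses non-greedy matching".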