writer2latex surrounds quotes with braces, which we skip to avoid useless ERT.
I broke this in 25fe87e5, which made Parser::next_next_token() not recognize
that it needed to parse one more token. If we had a unit test for the Parser
class, it would probably have detected that. Now we have at least a test for
the special quote.
If we call tex2lyx on a temporary file created from the clipboard, the
file is always in utf8 encoding, without any temporary changes, even if it
contains encoding-changing LaTeX commands. Therefore, we must tell tex2lyx
to use a fixed utf8 encoding for the whole file, and this is done using the
new latexclipboard format. Previously, tex2lyx thought the encoding was
latin1.
As a side effect, the -e option is now also documented in the man page.
I added it because of a misunderstanding. If the encoding is really set too
late, to_utf8() in the Token constructor would probably fail, but a non-empty
putback buffer is no problem, since it was always created via to_utf8() ->
from_utf8().
This is achieved by not calling Parser::tokenize_one() anymore in
Parser::good(): the status of the input can be tested without performing the
actual tokenizing. Now there are only two methods that may prevent an encoding
change: next_token() and next_next_token().
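A minimal sketch of the idea, using made-up names rather than the real
tex2lyx classes: good() only inspects already buffered tokens and the raw
stream state, while next_token() is what actually tokenizes (and therefore
pins down the encoding).

    #include <deque>
    #include <istream>
    #include <string>

    class MiniParser {
    public:
        explicit MiniParser(std::istream & is) : is_(is) {}

        // Only looks at existing state; never reads from the stream,
        // so an encoding switch can still take effect afterwards.
        bool good() const { return !buffer_.empty() || is_.good(); }

        // Tokenizing is deferred to the accessors that really need it.
        std::string const & next_token()
        {
            static std::string const empty;
            if (buffer_.empty())
                tokenize_one();
            return buffer_.empty() ? empty : buffer_.front();
        }

    private:
        void tokenize_one()
        {
            std::string tok;
            if (is_ >> tok)
                buffer_.push_back(tok);
        }

        std::istream & is_;
        std::deque<std::string> buffer_;
    };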
Jean-Marc discovered a possible data loss caused by Parser::getChar().
This is now fixed, and Parser::getChar() is removed, since it is no longer
needed and was easy to use in the wrong way.
When switching catcodes (for example to verbatim), there are in general some tokens that have already been parsed according to the old catcodes. It is therefore necessary to "forget" those tokens and reparse them.
The basic idea is to use idocstream::putback() to feed the characters back to the stream, but this does not work (probably because we disable buffering). Therefore, we implement a simple stream-like class that implements putback().
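A rough sketch of such a putback-capable wrapper, with illustrative names
(this is not the actual class added here):

    #include <istream>
    #include <string>
    #include <vector>

    class PutbackStream {
    public:
        explicit PutbackStream(std::istream & is) : is_(is) {}

        // Return the next character, preferring characters that were
        // put back earlier.
        int get()
        {
            if (!pushed_.empty()) {
                char const c = pushed_.back();
                pushed_.pop_back();
                return static_cast<unsigned char>(c);
            }
            return is_.get();
        }

        // Put a single character back; the next get() returns it.
        void putback(char c) { pushed_.push_back(c); }

        // Put a whole string back so that it is re-read in order.
        void putback(std::string const & s)
        {
            for (auto it = s.rbegin(); it != s.rend(); ++it)
                putback(*it);
        }

        bool good() const { return !pushed_.empty() || is_.good(); }

    private:
        std::istream & is_;
        std::vector<char> pushed_;
    };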
This is a refactoring commit, which may introduce minor regressions, but should go in the right direction.
* introduce Parser::verbatimEnvironment, which is a wrapper around verbatimStuff
* introduce output_ert, which inserts a string as ERT; this allows several duplicated pieces of code to be factored out (see the sketch after this list)
* rename handle_ert to output_ert_inset: this creates an inset and then calls output_ert
* remove handle_comment (use handle_ert instead)
* remove handle_backaspace (unused now)
* use the new methods in handle_listing
TODO:
* use for other verbatim-like cases (Sweave chunk, url...)
* maybe implement Parser::unparse to reparse tokens already parsed with the wrong catcodes (if needed)
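As a hedged illustration of the output_ert/output_ert_inset split (the real
functions handle more cases, and the LyX-file strings below are simplified):

    #include <ostream>
    #include <string>

    // Write a string as ERT content, mapping characters that are
    // special in the LyX file format.
    void output_ert(std::ostream & os, std::string const & s)
    {
        for (char const c : s) {
            if (c == '\\')
                os << "\n\\backslash\n";   // backslashes need their own token
            else if (c == '\n')            // line break: start a new paragraph
                os << "\n\\end_layout\n\n\\begin_layout Plain Layout\n";
            else
                os << c;
        }
    }

    // Create the surrounding ERT inset and delegate to output_ert().
    void output_ert_inset(std::ostream & os, std::string const & s)
    {
        os << "\\begin_inset ERT\nstatus collapsed\n\n"
              "\\begin_layout Plain Layout\n";
        output_ert(os, s);
        os << "\n\\end_layout\n\n\\end_inset\n";
    }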
- Implement catcode setting in Parser
- add a new Parser::verbatimStuff method that reads verbatim contents (see the sketch after this list)
- use this method to parse the "verbatim" environment.
- use it to parse \verb too.
- rename Parser::verbatimEnvironment to ertEnvironment.
TODO:
- use for other verbatim-like cases (Sweave chunk, lstlisting...)
- factor out the function that outputs ERT (including line breaks)
- maybe implement Parser::unparse (if needed)
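A self-contained sketch of the underlying technique, assuming a plain
character stream (the real Parser::verbatimStuff works on the Parser's own
input and also honours the catcode change mentioned above):

    #include <istream>
    #include <string>

    // Read characters raw, without any tokenization, until the given
    // delimiter string (e.g. "\\end{verbatim}") has been seen.
    // The returned contents do not include the delimiter.
    std::string verbatim_stuff(std::istream & is, std::string const & delim)
    {
        std::string result;
        std::string window;
        char c;
        while (is.get(c)) {
            result += c;
            window += c;
            if (window.size() > delim.size())
                window.erase(0, 1);
            if (window == delim) {
                result.erase(result.size() - delim.size());
                break;
            }
        }
        return result;
    }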
Provide functions for translating to the LyX name
of an encoding from either a LaTeX name or an Iconv
name, with the possibility to specify the package.
This is in anticipation of changing to use the LyX
name of the encoding in the .lyx file format and
allowing multiple lib/encodings entries to have
the same LaTeX name (but different packages!).
The tex2lyx parser needs to worry about the iconv
name of the input encoding, so store that instead
of the latex name.
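A minimal sketch of the kind of lookup this provides; the struct and function
names here are invented for illustration and are not the actual Encodings
interface:

    #include <string>
    #include <vector>

    struct EncodingInfo {
        std::string lyx_name;   // name used in the .lyx file
        std::string latex_name; // name known to inputenc/CJK
        std::string iconv_name; // name understood by iconv
        std::string package;    // e.g. "inputenc" or "CJK"
    };

    // Several entries may share a LaTeX name, so the package is needed
    // to pick the right one.
    std::string lyx_name_from_latex(std::vector<EncodingInfo> const & table,
                                    std::string const & latex,
                                    std::string const & package)
    {
        for (EncodingInfo const & e : table)
            if (e.latex_name == latex && e.package == package)
                return e.lyx_name;
        return std::string();
    }

    // Look up by iconv name only (the package is not considered here).
    std::string lyx_name_from_iconv(std::vector<EncodingInfo> const & table,
                                    std::string const & iconv)
    {
        for (EncodingInfo const & e : table)
            if (e.iconv_name == iconv)
                return e.lyx_name;
        return std::string();
    }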
These encodings were not defined, since they must not be used as document
encodings (the byte values of {, } and \ may occur inside multibyte
characters, and latex would be confused). However, they are supported by
CJK.sty (which uses a preprocessor to circumvent the limitations of the
latex executable). These encodings are now defined, but used for import in
tex2lyx only.
The test case CJK.tex contained fake tests for shift-jis and big5 (the
Japanese and Chinese characters were entered using the utf8 encoding), and
therefore the wrong interpretation of these encodings looked as if it worked.
The comments about missing iconv support for shift-jis and big5 were wrong as
well (otherwise shift-jis-plain would not work either).
Commit 7cfac95 got rid of empty lines that were created by removing \usepackage
statements. However, it added an additional newline in case the \usepackage
was not at the end of the line. This is now fixed.
The old fix was incomplete (\verb~\~ was translated to \verb~~ in roundtrip).
The real cause for this bug (and also the mistranslation of \href{...}{\}})
was the misbehaviour of Token::character() (see the comment in Parser.h): this
method returns a character even if the category is catEscape, and this is not
wanted in most (all?) cases.
- Parser.cpp: \verb can have any character as delimiter (except ASCII letters), not only '+'; therefore partly revert [3943b887/lyxgit] and fix it for all cases (see the sketch after this list)
- Parser.cpp: new function to parse verbatim environments
- test/test-structure.tex: updated example
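A toy version of the \verb rule described above, assuming a plain character
stream rather than the real Parser:

    #include <cctype>
    #include <cstdio>
    #include <istream>
    #include <string>

    // After \verb, the next character (anything except an ASCII letter)
    // is the delimiter; everything up to its next occurrence is read
    // verbatim into 'contents'.
    bool parse_verb(std::istream & is, std::string & contents)
    {
        int const delim = is.get();
        if (delim == EOF || std::isalpha(delim))
            return false;   // letters are not valid delimiters
        contents.clear();
        int c;
        while ((c = is.get()) != EOF && c != delim)
            contents += static_cast<char>(c);
        return c == delim;  // false if the closing delimiter is missing
    }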
git-svn-id: svn://svn.lyx.org/lyx/lyx-devel/trunk@40850 a592a061-630c-0410-9148-cb99ea01b6c8
If we want to look at the token after the next token, it may be necessary
to call tokenize_one() twice, not only once as done in good().
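In sketch form, with a hypothetical helper (not the real Parser code), the
fix amounts to buffering as many tokens as the look-ahead needs:

    #include <cstddef>
    #include <istream>
    #include <string>
    #include <vector>

    // Ensure that at least n tokens are buffered before peeking;
    // good() only guarantees that one more token could be read.
    bool ensure_tokens(std::vector<std::string> & buffer, std::size_t n,
                       std::istream & is)
    {
        std::string tok;
        while (buffer.size() < n && is >> tok)
            buffer.push_back(tok);
        return buffer.size() >= n;
    }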
git-svn-id: svn://svn.lyx.org/lyx/lyx-devel/trunk@40234 a592a061-630c-0410-9148-cb99ea01b6c8
If an optional argument can be followed by another one, we need to use
getFullOpt(). Otherwise it would not be possible to parse \foo[][bar].
Remove getOptContent to avoid confusion (it does exactly the same as
getArg('[', ']'), which is used in most places).
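A string-based illustration of why a "full" optional-argument getter is
needed (FullOpt and get_full_opt are made-up names; the real methods operate
on the Parser's token stream):

    #include <cstddef>
    #include <string>

    struct FullOpt {
        bool present = false;   // was there a [...] at all?
        std::string contents;   // what was inside the brackets
    };

    // Distinguishes \foo[][bar] from \foo[bar]: an empty optional
    // argument is reported as present-but-empty instead of vanishing.
    FullOpt get_full_opt(std::string const & input, std::size_t & pos)
    {
        FullOpt opt;
        if (pos < input.size() && input[pos] == '[') {
            std::size_t const close = input.find(']', pos + 1);
            if (close != std::string::npos) {
                opt.present = true;
                opt.contents = input.substr(pos + 1, close - pos - 1);
                pos = close + 1;
            }
        }
        return opt;
    }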
git-svn-id: svn://svn.lyx.org/lyx/lyx-devel/trunk@37354 a592a061-630c-0410-9148-cb99ea01b6c8
The parser discards all needed packages like calc.sty or framed.sty, so the
resulting document would be uncompilable with those boxes as ERT.
git-svn-id: svn://svn.lyx.org/lyx/lyx-devel/trunk@37287 a592a061-630c-0410-9148-cb99ea01b6c8
Fix the misinterpretation of "\}" as "}" when it occurred inside a pair of
unescaped braces, like in "\code{@\{*\}r||p\{1in\}@\{*\}}".
The reason for this bug is that Token::character() behaves differently in
tex2lyx than in mathed. See the comment in Parser.h for a more general fix.
For now I played it safe and only changed those places where I
definitely know that the old code was wrong.
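A self-contained illustration of the pitfall (the catcode names are modelled
on tex2lyx, but this ToyToken is not the real Token class):

    #include <string>

    enum CatCode { catOther, catEscape };

    struct ToyToken {
        char ch;
        CatCode cat;

        // Problematic accessor: for the escape token "\}" the stored
        // character is '}', so asking only for the character silently
        // drops the backslash.
        char character() const { return ch; }

        // What callers usually want instead: the token as it appeared
        // in the input, with the category taken into account.
        std::string asInput() const
        {
            return cat == catEscape ? std::string("\\") + ch
                                    : std::string(1, ch);
        }
    };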
git-svn-id: svn://svn.lyx.org/lyx/lyx-devel/trunk@37117 a592a061-630c-0410-9148-cb99ea01b6c8
Improve heuristic for outputting \bibliographystyle: Now it is suppressed
if \bibliography follows immediately, since LyX adds it automatically in that
case.
git-svn-id: svn://svn.lyx.org/lyx/lyx-devel/trunk@37066 a592a061-630c-0410-9148-cb99ea01b6c8
- Replace special verbatim commands with standard LaTeX, since it would be
extremely difficult to make tex2lyx understand them
- Comment duplicated \bibliography{xampl}, since LaTeX cannot handle two
\bibliography calls
- Fix a regression with spaces after commands, introduced in r36943
- Parse \multicolumn with space/comments between two arguments correctly
- Parse optional arguments correctly if there are spaces or comments between
the command and the argument
- Remove duplicate "LyX" phrase handling (I overlooked that in r37052)
- Add new commands created with \let to the list of known commands. This is
needed to parse the arguments correctly
git-svn-id: svn://svn.lyx.org/lyx/lyx-devel/trunk@37064 a592a061-630c-0410-9148-cb99ea01b6c8
- Make test-insets.tex and test-structure.tex compilable
- Avoid duplicate definition of \lyxarrow in test-insets.lyx
- Prevent subscript package from being ignored in test-insets.lyx
- Prevent commands listed with optional arg in syntax.default from being
concatenated with the next word if no optional arg is given
- Handle spaces and comments between a command and "{}" consistently
git-svn-id: svn://svn.lyx.org/lyx/lyx-devel/trunk@36943 a592a061-630c-0410-9148-cb99ea01b6c8
- correct the conversion of InsetCommand
- code optimization
- support for format 253: nomenclature is now recognized
git-svn-id: svn://svn.lyx.org/lyx/lyx-devel/trunk@30093 a592a061-630c-0410-9148-cb99ea01b6c8
Make several Parser methods return copies of tokens, because the
vector containing the tokens may get reallocated when it grows.
Revert now useless changeset 29557.
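The underlying C++ issue, in minimal form (illustrative code, not the actual
Parser methods):

    #include <cstddef>
    #include <string>
    #include <vector>

    // Returning a reference into a growing vector is unsafe: push_back()
    // may reallocate the storage and invalidate the reference.
    std::string const & unsafe_get(std::vector<std::string> const & tokens,
                                   std::size_t i)
    {
        return tokens[i];   // may dangle once the vector grows
    }

    // Returning a copy stays valid no matter how the vector grows later.
    std::string safe_get(std::vector<std::string> const & tokens,
                         std::size_t i)
    {
        return tokens[i];
    }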
git-svn-id: svn://svn.lyx.org/lyx/lyx-devel/trunk@29627 a592a061-630c-0410-9148-cb99ea01b6c8
This is overridden by any subsequent inputenc command.
The current encoding is also passed down to \input or \included files.
git-svn-id: svn://svn.lyx.org/lyx/lyx-devel/trunk@28295 a592a061-630c-0410-9148-cb99ea01b6c8
What works:
- parsing of inputenc should work
- \inputencoding is acted on in the preamble
What does not work:
- \inputencoding in the text
- all the corner cases I have not considered, and all buggy stuff in the
'what works' paragraph
- InsetLatexAccent insets are still created, but I do not know when they got
added to the code.
The only notable trick in the code is that I had to disable buffering. Otherwise
the whole text was read before I had a chance to change the encoding...
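The buffering trick, roughly (a hedged sketch; the real code goes through
LyX's docstream classes, and pubsetbuf(0, 0) behaviour is
implementation-defined, though it commonly means "unbuffered"):

    #include <fstream>

    // Request unbuffered reads before opening, so characters are pulled
    // from the file one by one and a later change of the input encoding
    // can still affect the rest of the file.
    std::ifstream open_unbuffered(char const * name)
    {
        std::ifstream is;
        is.rdbuf()->pubsetbuf(0, 0);
        is.open(name);
        return is;
    }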
Finally, I remove the artificial limitation that forbade
\usepackage[opt1,opt2]{package1,package2}
git-svn-id: svn://svn.lyx.org/lyx/lyx-devel/trunk@27592 a592a061-630c-0410-9148-cb99ea01b6c8
Read utf8 tex documents and translate them to lyxformat 249.
There is still no code to discover the encoding and use it, but that is the
easiest part (I hope).
git-svn-id: svn://svn.lyx.org/lyx/lyx-devel/trunk@27563 a592a061-630c-0410-9148-cb99ea01b6c8
Instead, new tokens are read when requested. The idea from now on is
to be able to change the encoding on the fly.
Now it is time to see whether idocstream actually works...
git-svn-id: svn://svn.lyx.org/lyx/lyx-devel/trunk@27489 a592a061-630c-0410-9148-cb99ea01b6c8