- for all TIPA characters the tipa and tipx packages must always be registered, because these characters are also allowed outside a TIPA inset
- include \textvertline because we again reached the maximum number of if statements allowed by MSVC (so we save at least one for now)
The LyX file format only supports one CJK font mapping per document.
Therefore all CJK environments which use a different mapping than the main
one are output in ERT. This is now explained in the test case, and the
conversion to ERT is also improved (although it is still wrong, e.g. for paragraphs).
- support for \textdoublevertline, \textvertline, \textglobfall
- support for \!o, \!b, \!d, \!g, \!G and \!j
- support for \*k, \*r, \*t and \*w (see the sketch below)
- register the package "tipa" if \textipa is detected in an equation
- add a \textipa equation to the testfile
(I failed to implement support for the TIPA character "\t*{ }" and the command "\=*".)
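For the supported shortcuts the handling boils down to a lookup table from TIPA
command to Unicode character. A minimal sketch (the helper name is hypothetical,
only a few entries are shown, and the code points should be double-checked
against the tipa documentation):

    #include <map>
    #include <string>

    // Hypothetical sketch: map a TIPA shortcut such as "!o" or "*w" to the
    // corresponding character (UTF-8 encoded).
    std::string tipa_shortcut_to_utf8(std::string const & cmd)
    {
        static std::map<std::string, std::string> const table = {
            {"!o", "ʘ"}, // bilabial click
            {"*r", "ɹ"}, // turned r
            {"*w", "ʍ"}, // turned w
        };
        auto const it = table.find(cmd);
        return it == table.end() ? std::string() : it->second;
    }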
- Use the LyX name of encodings instead of the LaTeX names.
The LyX name must be unique, while the name used by LaTeX
is not necessarily unique, e.g. different packages might
implement support for the same encoding.
- Rename koi8 to koi8-r, so that the LyX and LaTeX names match.
- Rename euc-jp-plain to euc-jp-platex, jis-plain to jis-platex
and shift-jis-plain to shift-jis-platex.
- Add utf8-platex encoding (fixes #8408).
LyX file format incremented to 463.
- support for the combining diacritical marks
- support for \texttoptiebar and \textbottomtiebar
- test-insets.lyx: add complete testcase
- TODO.txt: update what still needs to be done
This is achieved by not calling Parser::tokenize_one() anymore in
Parser::good(): the status of the input can be tested without performing the
actual tokenizing. Now there are only two methods that may prevent an encoding
change: next_token() and next_next_token().
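In other words, good() now only looks at already-buffered tokens and at the
state of the underlying stream. A simplified sketch of the idea (names and
members are illustrative, not the actual implementation):

    #include <cstddef>
    #include <istream>
    #include <vector>

    // Report whether more input is available without producing a new token,
    // so that a pending encoding change cannot be blocked by look-ahead.
    template <typename Token>
    bool parser_good(std::vector<Token> const & tokens, std::size_t pos,
                     std::istream & is)
    {
        if (pos < tokens.size())
            return true;   // already-tokenized input is still available
        return is.good();  // otherwise ask the raw stream, no tokenizing needed
    }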
This is an addendum to [72a44b3c/lyxgit] because depending on the environment, LyX adds a \phantomsection before \addcontentsline.
- also update the test file
This is an important part of the tests. If updating the test cases is really a
problem there are two better solutions than not testing the format: Convert
the references with lyx2lyx on the fly, or remove the hard coupling of tex2lyx
and LyX versions again. Both introduce additional possible error sources, but
these errors could at least be detected. If the test machinery is made blind
for versions, file format errors are impossible to detect.
Jean-Marc discovered a possible data loss caused by Parser::getChar().
This is now fixed, and Parser::getChar() is removed, since it is no longer
needed and easy to use in the wrong way.
- nothing needs to be done for feyn.sty - it is already recognized as a simple feature, and the roundtrip using the feynman example file works
- update entry for undertilde to be uniform
When switching catcodes (for example to verbatim), there are in general some tokens that have already been parsed according to the old catcodes. It is therefore necessary to "forget" those tokens and reparse them.
The basic idea is to use idocstream::putback() to feed the characters back to the stream, but this does not work (probably because we disable buffering). Therefore, we implement a simple stream-like class that implements putback().
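A minimal sketch of such a stream-like wrapper (names and details are
illustrative, not the actual class):

    #include <cstddef>
    #include <string>
    #include <vector>

    // Stream-like class with a working putback buffer.
    class PutbackStream {
    public:
        explicit PutbackStream(std::string const & data)
            : data_(data), pos_(0) {}

        // Return the next character, preferring anything that was put back.
        bool get(char & c)
        {
            if (!putback_.empty()) {
                c = putback_.back();
                putback_.pop_back();
                return true;
            }
            if (pos_ >= data_.size())
                return false;
            c = data_[pos_++];
            return true;
        }

        // Feed a character back; it is returned by the next get().
        void putback(char c) { putback_.push_back(c); }

    private:
        std::string data_;
        std::size_t pos_;
        std::vector<char> putback_; // used as a LIFO stack
    };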
This is a refactoring commit, which may introduce minor regressions, but should go in the right direction.
* introduce Parser::verbatimEnvironment, which is a wrapper around verbatimStuff
* introduce output_ert, which inserts a string as ERT; this allows factoring out several duplicated pieces of code.
* rename handle_ert to output_ert_inset: this creates an inset and then calls output_ert (see the sketch after this list)
* remove handle_comment (use handle_ert instead)
* remove handle_backslash (unused now)
* use the new methods in handle_listing
TODO:
* use for other verbatim-like cases (Sweave chunk, url...)
* maybe implement Parser::unparse to reparse tokens already parsed with the wrong catcodes (if needed)
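The intended split can be illustrated as follows (a sketch only, with
simplified signatures and an abbreviated inset format):

    #include <ostream>
    #include <string>

    // output_ert writes the raw string; the real function also takes a
    // Context and translates line breaks and special characters.
    void output_ert(std::ostream & os, std::string const & s)
    {
        os << s;
    }

    // output_ert_inset wraps the contents in an ERT inset.
    void output_ert_inset(std::ostream & os, std::string const & s)
    {
        os << "\n\\begin_inset ERT\nstatus collapsed\n\n"
              "\\begin_layout Plain Layout\n\n";
        output_ert(os, s);
        os << "\n\\end_layout\n\n\\end_inset\n";
    }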
- Implement catcode setting in Parser
- add a new Parser::verbatimStuff method that reads verbatim contents (a rough sketch follows after this list)
- use this method to parse "verbatim" environment.
- use it to parse \verb too.
- rename Parser::verbatimEnvironment to ertEnvironment.
TODO:
- use for other verbatim-like cases (Sweave chunk, lstlisting...)
- factor out the function that outputs ERT (including line breaks)
- maybe implement Parser::unparse (if needed)
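The core of such verbatim reading is simple; a rough sketch (illustrative only,
the real Parser::verbatimStuff works on the parser's own input and catcode
table):

    #include <istream>
    #include <string>

    // Read characters verbatim, ignoring catcodes, until the given end marker
    // (e.g. "\\end{verbatim}") has been seen; return everything before it.
    std::string read_verbatim(std::istream & is, std::string const & end)
    {
        std::string buf;
        char c;
        while (is.get(c)) {
            buf += c;
            if (buf.size() >= end.size()
                && buf.compare(buf.size() - end.size(), end.size(), end) == 0) {
                buf.erase(buf.size() - end.size()); // strip the end marker
                break;
            }
        }
        return buf;
    }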
Provide functions for translating to the LyX name
of an encoding from either a LaTeX name or an Iconv
name, with the possibility to specify the package.
This is in anticipation of changing to use the LyX
name of the encoding in the .lyx file format and
allowing multiple lib/encodings entries to have
the same LaTeX name (but different packages!).
The tex2lyx parser needs to worry about the iconv
name of the input encoding, so store that instead
of the latex name.
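A sketch of the intended lookup (data layout and function names are
illustrative, not the actual LyX API):

    #include <cstddef>
    #include <string>

    enum class Package { none, inputenc, CJK, japanese };

    struct EncodingEntry {
        std::string lyxname;   // unique, used in the .lyx file format
        std::string latexname; // possibly shared between packages
        std::string iconvname; // what tex2lyx needs to read the input
        Package package;
    };

    // Return the unique LyX name for a LaTeX encoding name, taking the
    // package into account, since two packages may use the same LaTeX name.
    std::string from_latex_name(EncodingEntry const * table, std::size_t n,
                                std::string const & latexname, Package package)
    {
        for (std::size_t i = 0; i < n; ++i)
            if (table[i].latexname == latexname && table[i].package == package)
                return table[i].lyxname;
        return std::string();
    }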
The old layouts are still there (marked as deprecated). The new ones are more or less correctly reverted (polishing required), but the old ones are not yet converted to the new ones. Once this is done, a further file format change should be made.
These encodings were not defined, since they must not be used as document
encodings (the characters {, } and \ may appear in high bytes, and latex
would be confused). However, they are supported by CJK.sty (which uses a
preprocessor to circumvent the limitations of the latex executable). These
encodings are now defined, but used for import in tex2lyx only.
The test case CJK.tex contained fake tests for shift-jis and big5 (the
japanese and chinese characters were entered using the utf8 encoding), and
therefore the wrong interpretation of these encodings looked as if it worked.
The comments about missing iconv support of shift-jis and big5 were wrong as
well (otherwise shift-jis-plain would not work either).
Actually, the test case showed several problems:
- ERT insets did use layout "Standard", not "Plain Layout"
- The font scale was read correctly, but tex2lyx claimed to ignore
the option "scaled=0.95"
- If a third argument of the CJK environment was given, it caused the whole
environment to be put in ERT with a broken encoding. This is now fixed for
the bug test case by using the \font_cjk header variable, but the encoding
problem still exists for unsupported encodings. I'll file a separate bug
for that.
- The CJKutf8 package was not handled in the preamble parsing. Therefore the
chinese comment in the preamble was read with a wrong encoding, and guessing
the document language did not work.
The new file CJKutf8.tex was created by copying and modifying CJK.tex, but
unfortunately it is impossible to tell git to inherit the history of CJK.tex
for the new file (search the web for git svn copy if you want to know details).
Special characters as created by latex_path() were not converted correctly
from LaTeX macros by tex2lyx. Now this is done, even for file names containing
double quotes which are not used for quoting spaces. Such file names are not
legal on Windows, and will cause problems in DVI files, but if they occur,
tex2lyx does not produce invalid .lyx files.
The fix is basically mechanical; the additional code for fraction-like insets
with three arguments was stolen from \unitfrac. As with any math package,
stackrel.sty needs a buffer parameter to switch it off.
I also added the two stackrel flavours to the toolbar.
Both problems were caused by the fact that tex2lyx did not handle
natbib/jurabib citations correctly if natbib/jurabib was loaded by the
document class. Therefore it tried to parse the standard \cite syntax, and
did not recognize \citet and \citep.
The \frametitle command is less convenient to use than the \frame argument, but it provides more options (overlay/action and short title). We thus support this in addition to the argument, like beamer itself does.
This has a list-like structure (with \onslide item commands). The previous implementation was rather useless, since it required lots of ERT. Since the new implementation is so different, we use ERT for conversion/reversion.
The lyx2lyx routines are not yet perfect, though.
The stmaryrd package adds support for lots of math symbols, using a font
designed to accompany the computer modern fonts. The changes in detail:
- Fix generate_symbols_list.py to work with stmaryrd.sty. It looks like it
was automatically translated from a Perl version and never used.
- Generate the new symbols in lib/symbols using generate_symbols_list.py and
add some manual adjustments
- Generate stmary10.ttf by a simple ttf export from stmary10.sfd with fontforge
- Add license info for stmary10.ttf
- Create a test file with all symbols from stmaryrd.sty. Actually it would be
nice to have this for the other fonts as well.
- The mechanics: lyx2lyx, tex2lyx, font machinery etc.
This patch puts all projects into subfolders (at least for MSVS). In this
way, there is a better overview (especially as the number of test projects
increases).
Now that we have module support for literate programming, it is possible to do a noweb cleanup. This is basically a patch from Kayvan Sylvan:
- get rid of literate-xxx classes
- rename Scrap to Chunk, since this is the name noweb doc uses (Scrap is from nuweb)
- update lyx file format and add lyx2lyx support for getting rid of literate-xxx classes
- update documentation
On the top of it, update tex2lyx to
- avoid creating files with literate-xxx class
- fix conflict between parsing << as a quote and parsing it as a Chunk
- create Chunk layouts instead of Scrap ones.
Actually tex2lyx has been able to handle modules for some time (#5702), but not
theorems (#5776). Now the following issues are fixed:
- Modules that depend on other modules can be loaded, since the dependencies
are loaded first
- Default modules of the text class are loaded correctly
- \newtheorem is recognized as a command that defines new environments and
treated similarly to \newenvironment
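Conceptually the \newtheorem handling looks like this (a simplified sketch, not
the real parser code): the name defined by \newtheorem is remembered, so that a
later \begin{...} with that name can be mapped to a layout instead of ERT.

    #include <set>
    #include <string>

    std::set<std::string> known_theorem_environments;

    // e.g. \newtheorem{thm}{Theorem} defines the environment "thm"
    void handle_newtheorem(std::string const & name)
    {
        known_theorem_environments.insert(name);
    }

    bool is_known_theorem_environment(std::string const & name)
    {
        return known_theorem_environments.count(name) > 0;
    }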
With this new command line switch a list of modules can be loaded
unconditionally. This seems to be needed for the literate programming formats,
and it is useful to work around bug #5702 as well.
The listings inset automatically loads the color package if any parameter
contains \color. As mentioned in bug #8066, tex2lyx needs to be aware of this,
so \usepackage{color} is automatically skipped in these cases.
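The check itself is simple (sketch only; the helper name is hypothetical and
the real code lives in the preamble handling): if any listings parameter
contains \color, remember that the color package is loaded automatically and
drop a later \usepackage{color}.

    #include <string>

    // True if the listings parameters force automatic loading of the color
    // package, in which case \usepackage{color} can be skipped on import.
    bool listings_params_force_color(std::string const & params)
    {
        return params.find("\\color") != std::string::npos;
    }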
Creating LYX_DATE dynamically at configure time caused unwanted recompilation
of whole directories (src + src/tex2lyx), because all the relevant objects
depended on a common file (flags.make in the case of "Unix Makefiles") which changed
accordingly.
There is now a new include (lyx_date.h, with only one definition).
Nothing changes for automake, since in this case LYX_DATE is defined in config.h.
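The generated header is as small as it gets; conceptually it contains a single
definition (the value shown here is only a placeholder):

    // lyx_date.h, generated by the build system at configure time.
    // Only the few files that include it are rebuilt when the date changes.
    #define LYX_DATE "yyyy-mm-dd"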
"s" is a valid horizontal position for framebox (as well as for makebox, which
was already parsed correctly). There was even a test case, but with a wrong
reference.
Instead of annoying the user with an automatically created note in the output
document which she needs to delete manually, determine the document language
automatically for documents that use CJK. This is done using a heuristic which
roughly counts the number of characters in each language and sets the one that
is used most often. This is not perfect, but it works for the two major use
cases: A document with only some CJK parts (in this case the babel language is
used), and a document which is mainly written in one CJK language. It is only
a minor problem if the heuristic is wrong, since the TeX export is still
correct, and there is no spell checking support for CJK anyway.
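The heuristic amounts to counting and taking the maximum; a minimal sketch
(illustrative, the real code first has to classify each character into one of
the CJK script ranges):

    #include <cstddef>
    #include <map>
    #include <string>

    // Pick the language whose characters occur most often; fall back to the
    // given default if no CJK characters were counted at all.
    std::string guess_document_language(
        std::map<std::string, std::size_t> const & counts,
        std::string const & fallback)
    {
        std::string best = fallback;
        std::size_t max = 0;
        for (auto const & p : counts) {
            if (p.second > max) {
                max = p.second;
                best = p.first;
            }
        }
        return best;
    }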
Now all regression tests do pass except for some relative path issues
depending on the location of the build directory.