As this required generating the paragraph before outputting it (when necessary), tests like XMLStream::isTagOpen no longer worked properly. This also refactors table handling to get rid of that case (and to make the code easier to read).
Includes a test case useful for some of the previous commits (notes in abstract, PI escaping, counter warnings).
Still missing: marginal and side notes. Shouldn't they be ported to InsetMarginal?
* invert failing lyx2lyx tests for ko/Welcome
* add dedicated test sample
* set language for English text part in ko/Welcome.
Also
* fix a lyx2lyx language test sample
* fix clause in unreliableTests
This happens with "inputenc: auto-legacy" if a language with default
encoding "utf8" (e.g. Turkmen or Mongolian) is used in a Quote
(or another environment).
Simplify user preamble.
Use a common test document for Xe- and LuaTeX with polyglossia
and a special one for languages only supported by XeTeX.
Update tagging patterns and comments.
Allow use of the font MonomakhUnicode. The font is available in TeX Live,
and making a symbolic link in ~/.fonts/fonts that points to the appropriate
directory makes the font available to the system as well.
LyX follows LaTeX in dropping support for this combination
(it only worked by tricking "inputenc.sty").
There is no known case where this combination is required or helpful.
For power users with special needs, XeTeX + TeX fonts is still
available after setting the input encoding to "ascii" or "utf8-plain".
See also #10600.
Following the suggestion in the Babel-Azerbaijani documentation,
we use the glyphs from the Cyrillic fonts for the Latin
text character. This fits better than IPA fonts (assuming matching
Latin and Cyrillic fonts are specified) and also provides bold etc.
Encoding cp858 is supported by only some iconv variants.
Most users will want to change their "encoding" setting instead
of installing/recompiling "iconv" to support this legacy encoding.
The ctests will likely fail with either "vanilla" or "enhanced"
iconv, and they test a situation that is unlikely to change,
so we now ignore this test by default.
Separate the xetex-inputenc test sample into working and non-working parts.
Sort HTML-only tests.
Update tagging and ignore-rules.
Change inputencoding to utf8 in dedicated tests (get pdf4_texF working).
Thai works fine with LuaTeX, TeX-fonts and auto-legacy input encoding.
Remove obsolete preamble code;
we now load "fontenc" with Japanese documents by default.
* do not ignore Japanese (platex) with system fonts.
* CJK can be used with XeTeX and TeX-fonts if the input encoding is utf8;
do not ignore.
* TODO: set non-TeX fonts and uninvert where possible.
Fixes wrong and missing characters in text parts in other languages
(platex does not support "inputenc").
Fixes compilation errors due to desynchronized encoding switches.
* some Japanese (platex) documents fail with inputenc "utf8-platex"
(missing characters in non-Japanese text parts), because the
Unicodechar definitions from "inputenc" are not used.
* some Japanese (platex) documents show wrong output with "auto",
because platex ignores the encoding switch for text parts
in other languages.
* Japanese Beamer documents must set default output to "pdf",
because dvipdfm(x) produces wrong output with document class "Beamer".
* update tagging/inverting rules.
* use HE8 font encoding for Hebrew in language test.
While HE8 provides more characters and prevents use of bitmap fonts,
forcing its use may break older installations.
The dedicated test file 012_hebrew_he_HE8.lyx provides an
example for use of HE8 encoded fonts with babel-hebrew.
The "nikud" (vowel) signs, shindot, and shindot are combining Unicode
characters. However, LaTeX-Hebrew expects them as postfix characters, not
accent macros (cf. www.cs.tau.ac.il/~stoledo/Bib/Pubs/vowels.pdf).
Xe/LuaTeX convert \AA to the deprecated character U+212B (which is missing
in the default Latin Modern font) instead of the recommended U+00C5.
Also fix some of the "missing character" errors in Math.lyx when compiled with
Xe/LuaTeX, which were caused by the replacement of \AA with literal U+212B
characters in math insets due to the old definitions in unicodesymbols.
Update the minimal example for failures of Math.lyx with system fonts.
New bug in TeXLive 18.
Missing characters with XeTeX and wrong characters with LuaTeX.
Also:
* Remove spurious (Latin) characters from uk/Intro.lyx
* "wrong-output" tag for Cyrillic documents with XeTeX and TeX fonts.
Prevents wrong or missing characters with LuaTeX and 8-bit fonts.
Also "uninvert" the corresponding test case and two other
no longer failing "unicodesymbols" exports.
If Document>Settings>Language>Encoding is set to any value except "auto" or "default", we
expect the whole document to use this encoding. With encodings from the CJK package, this means
one big "CJK" environment and no encoding switches.
Characters that are not handled by the CJK package need to be "forced" in lib/unicodesymbols.
This is completed for "euc-cn", the others will follow.
A \clearpage command issued right before \end{CJK} is recommended by the
package author to prevent any un-processed CJK chars outside the
\begin{CJK} and \end{CJK} scope. Otherwise, the TOC, header, and footer
may contain CJK chars but get processed outside the CJK environment scope.
The new dedicated export test fails without the fix.
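A rough sketch of the intended output structure (the CJK encoding "GB" and the
font family "gbsn" are illustrative choices, not necessarily what LyX emits):

  \documentclass{article}
  \usepackage{CJK}
  \begin{document}
  \begin{CJK}{GB}{gbsn}
  Whole document body, with no encoding switches.
  \clearpage  % flush pending pages so that TOC, headers, and footers
              % are still processed inside the CJK environment
  \end{CJK}
  \end{document}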
The textcomp Unicode support file "ts1enc.dfu" defines 0x204E Low Asterisk
as \textasteriskcentered. LyX should follow suit.
The ASTERISK OPERATOR (correctly) maps to the same macro,
the "deprecated" tag marks the upstream mapping as preferred choice.
The Thai tis620-0 input encoding is supported via the inputenc "plug in"
(data) file tis620.def from https://ctan.org/pkg/babel-thai.
We can handle it like the other contributed input encodings, e.g.,
Greek (ISO 8859-7) and the several Cyrillic encodings from
http://www.ctan.org/pkg/latex-cyrillic.
Under TeXLive 2018, the input encoding defaults to utf8 if there is no call to
inputenc. The added test file fails without the patch but compiles fine if the
file "tis620.def" is present in the TEXPATH.
utf8-plain (Unicode (utf8 XeTeX)) is a power-user setting
for the input encoding with two use cases:
a) setup of system fonts or
b) setup of input encoding support
in the document class or user preamble.
The test file is an example for use case b.
Don't insert an empty line when translating QuoteInsets to literal
quotes.
Fix regexp pattern in re/convert_dashligatures.
Adjust logic in re/convert_dash(ligatur)es.
* use unicode.transform() instead of loop over replacements
* telling variable names
* remove trailing whitespace
* documentation update
* don't set use_ligature_dashes if both dash types are found
* remove spurious warning, normalize indentation, and use
Python idioms in revert_baselineskip()
iconv fails if a nomenclature inset contains an uncodable character.
This led to failure of the Indonesian UserGuide in the attic.
Fix it there and add a minimal, specific test sample instead.
Encoding cp858 is only supported by some iconv variants.
GNU iconv only supports it if configured with "--enable-extra-encodings"
(see https://www.gnu.org/software/libiconv/).
Maybe drop support or add a configuration check?
The xfrac package is the "state of the art" for "split-level" (nice) fractions.
Character replacements look consistent, scale properly and fit in the line.
Fixes #5220.
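A minimal example of the xfrac interface this relies on:

  \documentclass{article}
  \usepackage{xfrac}
  \begin{document}
  A split-level fraction such as \sfrac{1}{2} scales with the surrounding
  font and fits into the line, unlike a plain "1/2" character replacement.
  \end{document}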
As Enrico pointed out in http://www.mail-archive.com/lyx-devel@lists.lyx.org/msg131411.html, the loading error of testcases_speed.lyx is caused by tags from a LyX development version that were later removed but never handled in lyx2lyx.
LaTeX export still fails with
! Argument of \xargs@grab@opt has an extra }.
Test unicodesymbols for most supported input encodings with Kornel's addition to ctests.
Add required "forces" to unicodesymbols:
* utf8x does not support all characters supported by LyX
* several 8-bit encodings map characters to math-mode commands - force replacement in text-mode so that LyX can wrap them in \ensuremath.
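As an illustration of this wrapping (a sketch only; the multiplication sign is
a hypothetical example, not a specific unicodesymbols entry):

  \documentclass{article}
  \begin{document}
  3 \ensuremath{\times} 4  % works in running text; a bare \times would not
  \end{document}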
Fix a misalignment (wrong replacements) in the Cyrillic Unicode block.
Use \mathscr for Mathematical Script characters in the Mathematical Alphanumeric Symbols block (in line with the characters in other Unicode blocks).
Rename the directory for test samples "export/latex/Unicode-characters" to "export/latex/unicodesymbols". This matches the purpose to test the lib/unicodesymbols file.
First run of Kornel's patch for tests with all input encodings in lib/encodings.
Remove redundant sample files - keep only one sample and change the input encoding in the test script.
Put remaining failing test in "unreliableTests" for later sorting...
Move them to a subdir, ignore this subdir for other tests.
Dedicated test samples for LaTeX-specific problems don't give additional value if tested for loading, conversion, or other exports.
This led to errors when compiling with polyglossia (and non-TeX fonts).
A minimal (currently non-compiling) test sample is kept in autotests/export/
and inverted in suspiciousTests.
As of 0b1cf133 we now warn in the GUI about this issue, but there is a
discussion about whether we should change our LaTeX output and allow
for the workflow of mixing inTitle layouts. For more information,
see #10347.
XeTeX with TeX fonts is only safe with ASCII input encoding (see #9740)
and we therefore force "ascii" when exporting with XeTeX and 8-bit TeX-fonts.
However, "utf8-plain" is a "power-user" option, which allows to switch off LyX's
encoding of the LaTeX file:
keep this also for "XeTeX with TeX fonts".
The user is responsible for ensuring that all characters can be processed and are
correctly shown in the output. The provided test sample shows the problems
with this encoding without special measures (like loading fontspec in the
user-preamble or a document class).
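A sketch of such a special measure: a user preamble (or document class) that
sets up a Unicode font itself; the font name is illustrative (compile with
XeLaTeX or LuaLaTeX):

  \documentclass{article}
  \usepackage{fontspec}
  \setmainfont{FreeSerif}
  \begin{document}
  Text that LyX passes through verbatim with input encoding "utf8-plain".
  \end{document}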
We test here if any \end_{something} matches \begin_{something}.
Exceptions are \end_index and \end_branch ATM; they should
match \index and \branch, respectively.
Also added a new test file.
To avoid the error, there are two possibilities:
1.) change \inputencoding to utf8-platex, or
2.) write some text in the default language prior to the following
subchapter 1.2.2.
Probably a candidate for language nesting.
This works around a limitation of the test machinery, which never switches
TeX fonts on for formats that need them; it only switches TeX fonts off for
formats that require it.
Thanks to Kornel we do now have the infrastructure for running dedicated
export tests. This is the first one, showing a language nesting bug which is
already in 2.1. It is inverted for now, but this will hopefully change soon.