Prevents wrong or missing characters with LuaTeX and 8-bit fonts.
Also "uninvert" the corresponding test case and two other
no longer failing "unicodesymbols" exports.
If Document>Settings>Language>Encoding is set to any value except "auto" or "default", we
expect the whole document to use this encoding. With encodings from the CJK package, this means
one big "CJK" environment and no encoding switches.
Characters that are not handled by the CJK package need to be "forced" in lib/unicodesymbols.
This has been done for "euc-cn"; the others will follow.
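As an illustration (a sketch, not the exact exporter output), a document set to "euc-cn"
would be wrapped in one CJK environment; "GB" is assumed here as the matching CJK package option:

    \documentclass{article}
    \usepackage{CJK}
    \begin{document}
    % one CJK environment spanning the whole body, no encoding switches;
    % "GB" assumed as the CJK option corresponding to euc-cn
    \begin{CJK}{GB}{}
    The complete document body goes here.
    \end{CJK}
    \end{document}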
A \clearpage command issued right before \end{CJK} is recommended by the
package author to prevent any un-processed CJK chars outside the
\begin{CJK} and \end{CJK} scope. Otherwise, the TOC, headers, and footers
may contain CJK chars that get processed outside the CJK environment scope.
The new dedicated export test fails without the fix.
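The recommended end-of-document pattern then looks roughly like this (sketch):

    \documentclass{article}
    \usepackage{CJK}
    \begin{document}
    \begin{CJK}{GB}{}
    \tableofcontents
    Body text, possibly with CJK characters.
    % ship out the last page while CJK is still active, so deferred
    % material (TOC entries, headers, footers, floats) stays in scope
    \clearpage
    \end{CJK}
    \end{document}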
The textcomp Unicode support file "ts1enc.dfu" defines 0x204E Low Asterisk
as \textasteriskcentered. LyX should follow suit.
The ASTERISK OPERATOR (correctly) maps to the same macro,
the "deprecated" tag marks the upstream mapping as preferred choice.
The Thai tis620-0 input encoding is supported via the inputenc "plug in"
(data) file tis620.def from https://ctan.org/pkg/babel-thai.
We can handle it like the other contributed input encodings, e.g.,
Greek (ISO 8859-7) and the several Cyrillic encodings from
http://www.ctan.org/pkg/latex-cyrillic.
Under TeXLive 2018, the input encoding defaults to utf8 if there is no call to
inputenc. The added test file fails without the patch but compiles fine if the
file "tis620.def" is present in the TEXPATH.
utf8-plain (Unicode (utf8 XeTeX)) is a power-user setting
for the input encoding with two use cases:
a) setup of system fonts or
b) setup of input encoding support
in the document class or user preamble.
The test file is an example for use case b.
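A minimal sketch of use case b (the utf8 inputenc call below only stands in for
whatever the class or user preamble actually provides):

    \documentclass{article}
    % with utf8-plain, LyX emits no encoding setup of its own;
    % here the user preamble provides it instead:
    \usepackage[utf8]{inputenc}
    \begin{document}
    Unicode text is passed to LaTeX unchanged.
    \end{document}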
Don't insert empty line when translating QuoteInsets to literal
quotes.
Fix regexp pattern in re/convert_dashligatures.
Adjust logic in re/convert_dash(ligatur)es.
* use unicode.transform() instead of a loop over replacements
* use telling variable names
* remove trailing whitespace
* documentation update
* don't set use_ligature_dashes if both dash types are found
* remove spurious warning, normalize indentation, and use
Python idioms in revert_baselineskip()
iconv fails if a nomenclature inset contains an uncodable character
This led to failure of the Indonesian UserGuide in the attic.
Fix it there and add a minimal, specific test sample instead.
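A minimal sketch of the failing constellation: with an 8-bit document encoding,
a nomenclature entry containing a character that encoding cannot represent makes
the iconv conversion fail on export (latin1, nomencl, and the Greek letter below
are chosen for illustration only):

    \documentclass{article}
    \usepackage[latin1]{inputenc}
    \usepackage{nomencl}
    \makenomenclature
    \begin{document}
    % the literal "α" is not representable in latin1, so LyX's iconv
    % step fails before this .tex file could even be written
    \nomenclature{α}{a Greek letter}
    \printnomenclature
    \end{document}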
Encoding cp858 is only supported by some iconv variants
GNU iconv only supports it if configured with "--enable-extra-encodings"
(see https://www.gnu.org/software/libiconv/)
Maybe drop support or add a configuration check?
The xfrac package is the "state of the art" for "split-level" (nice) fractions.
Character replacements look consistent, scale properly and fit in the line.
Fixes #5220.
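For example, the replacement for a vulgar-fraction character like U+00BD could then be (sketch):

    \documentclass{article}
    \usepackage{xfrac}
    \begin{document}
    One half as a split-level fraction: \sfrac{1}{2}
    \end{document}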
As Enrico pointed out in http://www.mail-archive.com/lyx-devel@lists.lyx.org/msg131411.html, the loading error of testcases_speed.lyx is caused by tags from a LyX development version that were later removed but never handled in lyx2lyx.
LaTeX export still fails with
! Argument of \xargs@grab@opt has an extra }.
Test unicodesymbols for most supported input encodings with Kornel's addition to ctests.
Add required "forces" to unicodesymbols:
* utf8x does not support all characters supported by LyX
* several 8-bit encodings map characters to math-mode commands - force replacement in text-mode so that LyX can wrap them in \\ensuremath (see the sketch after this list).
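A sketch of the intended output for such a forced replacement (\pm just stands in
for one of the affected math-mode commands):

    \documentclass{article}
    \begin{document}
    % a replacement that only exists as a math-mode command is wrapped
    % in \ensuremath so it also works in running text
    plus-minus in text: \ensuremath{\pm}
    \end{document}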
Fix a misalignment (wrong replacements) in the Cyrillic Unicode block.
Use \\mathscr for Mathematical Script characters in the Mathematical Alphanumeric Symbols block (in line with the script characters in other Unicode blocks).
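For instance, MATHEMATICAL SCRIPT CAPITAL A would then be exported along these lines
(sketch; mathrsfs is one package that provides \mathscr):

    \documentclass{article}
    \usepackage{mathrsfs}% provides \mathscr
    \begin{document}
    Mathematical script capital A: $\mathscr{A}$
    \end{document}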
Rename the directory for test samples "export/latex/Unicode-characters" to "export/latex/unicodesymbols". This matches its purpose: testing the lib/unicodesymbols file.
First run of Kornel's patch for tests with all input encodings in lib/encodings.
Remove redundant sample files - keep only one sample and change the input encoding in the test script.
Put remaining failing test in "unreliableTests" for later sorting...