Thanks to Jürgen, who mentions the following:
luaotfload does not find "DavidCLM". In fact, at least on my system,
there is no such font, only "DavidCLM Medium" (and other shapes). That
one is found. Apparently, luaotfload cannot infer the one from the
other.
Unlike LuaTeX, XeTeX also queries the TEXMF tree, so maybe it simply
finds the font there.
In many cases, round-tripping with older formats involves exporting
ERT or preamble code in the backwards conversion. If that code is not
parsed in the forwards conversion, errors often result. However, in
many cases, especially for older formats, it is not worth the time or
code complexity to address this. Such tests are labeled
"ertroundtrip".
This commit also inverts a currently failing lyx22x test under the
label "ertroundtrip" since the above paragraph is my best guess as
to why that test is failing. It is likely not worth the time to fix
it, especially since the APA7 layout wasn't even shipped for LyX
2.2.x.
* invert failing lyx2lyx tests for ko/Welcome
* add dedicated test sample
* set language for English text part in ko/Welcome.
Also
* fix a lyx2lyx language test sample
* fix clause in unreliableTests
This happens with "inputenc: auto-legacy" if a language with default
encoding "utf8" (e.g. Turkmen or Mongolian) is used in a Quote
(or another environment).
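For illustration, a rough sketch of the failing pattern (not the exact
LyX export; babel options and text are placeholders): with auto-legacy
the body starts in a legacy encoding, and an \inputencoding switch is
emitted inside the environment for the utf8-language part.

    \documentclass{article}
    \usepackage[latin1]{inputenc}  % auto-legacy settles on a legacy encoding
    \usepackage[turkmen,english]{babel}
    \begin{document}
    English text, encoded as latin1.
    \begin{quote}
      \selectlanguage{turkmen}%
      \inputencoding{utf8}% encoding switch inside the environment
      (Turkmen text, written as utf8)
    \end{quote}
    \end{document}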
Debian stable now ships TL18, so we no longer need to care about older
TL versions.
Make the CJK-ko documentation more robust (it failed with non-TeX
fonts and XeTeX if Latin Modern is not installed system-wide).
The test sample for LyX bug 3059 triggers an error only with
"fontencoding auto-legacy" and can be safely ignored with non-TeX fonts.
Simplify user preamble.
Use common test document for Xe- and LuaTeX with polyglossia
and special one for languages only supported by XeTeX.
Update tagging patterns and comments.
LyX follows LaTeX in dropping support for this combination
(it only worked by tricking "inputenc.sty").
There is no known case where this combination is required or helpful.
For power users with special needs, XeTeX + TeX fonts is still
available after setting the input encoding to "ascii" or "utf8-plain"
(see the sketch below).
See also #10600.
Amends 7bb30286.
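For the record, a minimal sketch of that power-user route (an assumed
preamble, not taken from a test file): with input encoding "ascii" no
inputenc.sty is loaded, and classic 8-bit TeX fonts are selected via
fontenc even though the engine is XeTeX.

    \documentclass{article}
    \usepackage[T1]{fontenc}  % TeX fonts (no fontspec), compiled with XeTeX
    % no inputenc.sty: the source is expected to be plain ASCII
    \begin{document}
    Plain ASCII text typeset with XeTeX and 8-bit TeX fonts.
    \end{document}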
Tested cases are now handled fine.
(There are still many cases where the language support emulation
is too complex for lyx2lyx and manual fixes are required after
lyx2lyx conversion.)
Encoding cp858 is supported by only some iconv variants.
Most users will want to change their "encoding" setting instead
of installing/recompiling "iconv" to support this legacy encoding.
The ctests will likely fail with either "vanilla" or "enhanced"
iconv, and they test a situation that is unlikely to change, so we
now ignore this test by default.
Separate the xetex-inputenc test sample into working and non-working parts.
Sort HTML-only tests.
Update tagging and ignore-rules.
Change inputencoding to utf8 in dedicated tests (get pdf4_texF working).
* do not ignore Japanese (platex) with system fonts.
* CJK can be used with XeTeX and TeX fonts if the input encoding is
  utf8; do not ignore these tests.
* TODO: set non-TeX fonts and uninvert where possible.
Fixes wrong and missing characters in text parts in other languages
(platex does not support "inputenc").
Fixes compilation errors due to desynchronized encoding switches.
Tenacious bug in babel-ukrainian:
The date string uses literal Unicode characters (not present in TeX
fonts) that somehow bypass inputenc's utf8 decoding.
* New: also support utf8 (working around a false-positive test in
  "inputenc.sty").
* Do not force the change of input encoding to "ascii".
Deny compilation with XeTeX if a document uses TeX fonts and a non-supported input encoding.
* some Japanese (platex) documents fail with inputenc "utf8-platex"
(missing characters in non-Japanese text parts), because the
Unicodechar definitions from "inputenc" are not used.
* some Japanese (platex) documents show wrong output with "auto",
because platex ignores the encoding switch for text parts
in other languages.
* Japanese Beamer documents must set default output to "pdf",
because dvipdfm(x) produces wrong output with document class "Beamer".
* update tagging/inverting rules.
* use HE8 font encoding for Hebrew in language test.
From Günter:
> OK, so in TL18 the Ukrainian "auto-date" (7 березня 2019 р.) fails with
> PDF (XeTeX) and DVI (LuaTeX) but not PDF (LuaTeX).
> Strange. Feel free to invert.
New bug in TeXLive 18.
Missing characters with XeTeX and wrong characters with LuaTeX.
Also:
* Remove spurious (Latin) characters from uk/Intro.lyx
* "wrong-output" tag for Cyrillic documents with XeTeX and TeX fonts.
Documents used deprecated or lookalike characters missing in
Latin Modern system fonts:
Customization.lyx: "figure dash" instead of "emdash".
revtex4-1: "Angstrom sign" instead of "latin letter A with ring".
Prevents wrong or missing characters with LuaTeX and 8-bit fonts.
Also "uninvert" the corresponding test case and two other
no longer failing "unicodesymbols" exports.
If Document>Settings>Language>Encoding is set to any value except
"auto" or "default", we expect the whole document to use this
encoding. With encodings from the CJK package, this means one big
"CJK" environment and no encoding switches.
Characters that are not handled by the CJK package need to be "forced" in lib/unicodesymbols.
This is completed for "euc-cn", the others will follow.
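A rough sketch of what the export then looks like (simplified; the CJK
encoding name and font family below are placeholders for whatever
"euc-cn" maps to):

    \documentclass{article}
    \usepackage{CJK}
    \begin{document}
    \begin{CJK}{GB}{gbsn}  % one CJK environment around the whole body
    Latin and Chinese text alike stay inside this single environment;
    characters the CJK package cannot handle are forced via
    lib/unicodesymbols.
    \end{CJK}
    \end{document}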
As expected, LyX gives the following warning:
TextClass.cpp (1385): The layout does not provide a list command for
the float `sidebar'. LyX will not be able to produce a float list.
This issue was reported to the maintainer. This commit is consistent
with 00f7a95f.
It used to give an endless loop, so we "ignored" it (did not run the
test). Now it gives a lyx2lyx warning, which is reported at #11455,
so it is appropriate to invert the test.
The 001-4-latin_utf8x_pdf2 test passes and the
001-4-latin_utf8-cjk_pdf2 test fails, which means that there are
characters in the .lyx file that are only available with
utf8-extended encoding, so the utf8 test is never expected to pass
in the future.
Thanks to Kornel.
utf8-plain (Unicode (utf8 XeTeX)) is a power-user setting
for the input encoding with two use cases:
a) setup of system fonts or
b) setup of input encoding support
in the document class or user preamble.
The test file is an example for use case b.
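For illustration, two assumed preamble fragments of the kind meant
here (placeholders, not the actual test file): with utf8-plain, LyX
writes raw UTF-8 and adds no encoding support of its own, so the class
or user preamble has to supply whatever is needed.

    % use case a: system fonts set up by hand (XeTeX/LuaTeX)
    \usepackage{fontspec}
    \setmainfont{FreeSerif}  % placeholder font name

    % use case b: input encoding support supplied by the preamble (pdfTeX)
    \usepackage[utf8]{inputenc}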
The Korean splash.lyx is expected to fail with pdflatex. The lyx22x
and lyx23x tests were not failing before because they were exporting
to XeTeX with system fonts, which succeeds. After c9e62dec (which
corrects the export format to the default), the lyx22x and lyx23x
tests should be inverted.
Use the LaTeX internal character representation (LICR) macros
provided by lgrenc.def (since version 0.8 from 2013-05-13)
in lib/unicodesymbols. This fixes the PDF bookmarks (except for the
legacy input encoding iso-8859-7) and solves the problem of a missing
"v" character in Libertine LGR fonts (see lyx-users from 2018-01-29).
The ctest "unicodesymbols/008-greek-and-coptic_iso8859-7_pdf2" now fails
(due to #9681). This is not a regression, as it is already
"unreliable" (wrong output, Latin character instead of Greek).
Drop compatibility definition of \~ as perispomeni accent
(that was required with lgrenc.def < 0.8).
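For illustration (assumed and simplified, not the actual
lib/unicodesymbols syntax): U+03B1 GREEK SMALL LETTER ALPHA is now
mapped to the LICR macro \textalpha from lgrenc.def, so the exported
LaTeX contains something like

    \documentclass{article}
    \usepackage[LGR,T1]{fontenc}
    \begin{document}
    {\fontencoding{LGR}\selectfont
     \textalpha}  % LICR macro; hyperref can map it to a bookmark character
    \end{document}

instead of, e.g., a transliterated Latin letter that only renders as
Greek while the LGR encoding is active.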
The export ja/lilypond_pdf fails because ps2pdf gives an error. It
is thus still inverted, under the category 'externalissues'. As
Jürgen discovered, ps2pdf succeeds if the -dNOSAFER flag is used.
Note that Kornel is seeing strange behavior with the sweave test,
and thus the label of that test might be changed soon (e.g. to
"unreliable"). For discussion, see:
https://www.mail-archive.com/search?l=mid&q=20171001032524.fr5xfngylththwv2%40steph
This test started failing after 8bf3d7bb. I did not look deeply into
why, because the corresponding de and es tests were already
inverted, and because in general we do not expect texF tests to work
well.
iconv fails if a nomenclature inset contains an uncodable character.
This led to the failure of the Indonesian UserGuide in the attic.
Fix it there and add a minimal, specific test sample instead.