If Document>Settings>Language>Encoding is set to any value except "auto" or "default", we
expect the whole document to use this encoding. With encodings from the CJK package, this means
one big "CJK" environment and no encoding switches.
Characters that are not handled by the CJK package need to be "forced" in lib/unicodesymbols.
This has been done for "euc-cn"; the others will follow.
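For illustration, a rough sketch of what such an export looks like (hand-written, not an actual LyX export; it assumes the CJK package is installed, that its "GB" encoding is used for euc-cn, and that "song" is an available font family):

  \documentclass{article}
  \usepackage{CJK}
  \begin{document}
  \begin{CJK}{GB}{song}
  % the whole body sits in one CJK environment; with a fixed document
  % encoding, no \CJKenc switches are emitted
  (EUC-CN encoded text would go here)
  \end{CJK}
  \end{document}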
As expected, LyX gives the following warning:
TextClass.cpp (1385): The layout does not provide a list command for
the float `sidebar'. LyX will not be able to produce a float list.
This issue was reported to the maintainer. This commit is consistent
with 00f7a95f.
It used to give an endless loop, so we "ignored" it (did not run the
test). Now it gives a lyx2lyx warning, which is reported at #11455,
so it is appropriate to invert the test.
The 001-4-latin_utf8x_pdf2 test passes and the
001-4-latin_utf8-cjk_pdf2 test fails, which means that there are
characters in the .lyx file that are only available with
utf8-extended encoding, so the utf8 test is never expected to pass
in the future.
Thanks to Kornel.
utf8-plain (Unicode (utf8 XeTeX)) is a power-user setting
for the input encoding with two use cases:
a) setup of system fonts or
b) setup of input encoding support in the document class or user preamble.
The test file is an example for use case b.
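A minimal sketch of use case b (not the actual test file): with utf8-plain, LyX emits no input-encoding setup of its own, so the class or user preamble has to provide it, e.g.:

  \documentclass{article}
  % the encoding setup is supplied by the user preamble (or the
  % document class), not by LyX:
  \usepackage[utf8]{inputenc}
  \begin{document}
  Grüße aus München
  \end{document}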
The Korean splash.lyx is expected to fail with pdflatex. The lyx22x
and lyx23x tests were not failing before because they were exporting
to XeTeX with system fonts, which succeeds. After c9e62dec (which
corrects the export format to the default), the lyx22x and lyx23x
tests should be inverted.
Use the LaTeX internal character representation (LICR) macros
provided by lgrenc.def (since version 0.8 from 2013-05-13)
in lib/unicodesymbols. This fixes the PDF bookmarks (except for the
legacy input encoding iso-8859-7) and solves the problem of a missing
"v" character in Libertine LGR fonts (see lyx-users from 2018-01-29).
The ctest "unicodesymbols/008-greek-and-coptic_iso8859-7_pdf2" now fails
(due to #9681). This is not a regression, as it is already
"unreliable" (wrong output, Latin character instead of Greek).
Drop compatibility definition of \~ as perispomeni accent
(that was required with lgrenc.def < 0.8).
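For reference, a small hand-written sketch of the LICR usage (assuming greek-fontenc with lgrenc.def >= 0.8 and LGR-encoded fonts such as the CB Greek fonts are installed; the textalpha package from the same bundle makes the macros usable outside LGR):

  \documentclass{article}
  \usepackage[LGR,T1]{fontenc}
  \usepackage{textalpha}% from greek-fontenc; provides the Greek LICR macros
  \begin{document}
  % LICR macros instead of raw Latin transliteration; this is what lets
  % hyperref produce correct PDF bookmarks:
  \textalpha\textbeta\textgamma
  \end{document}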
The export ja/lilypond_pdf fails because ps2pdf gives an error. It
is thus still inverted, under the category 'externalissues'. As
Jürgen discovered, ps2pdf succeeds if the -dNOSAFER flag is used.
Note that Kornel is seeing strange behavior with the sweave test,
and thus the label of that test might be changed soon (e.g. to
"unreliable"). For discussion, see:
https://www.mail-archive.com/search?l=mid&q=20171001032524.fr5xfngylththwv2%40steph
This test started failing after 8bf3d7bb. I did not look deeply into
why, because the corresponding de and es tests were already
inverted, and because in general we do not expect texF tests to work
well.
iconv fails if a nomenclature inset contains an uncodable character.
This led to failure of the Indonesian UserGuide in the attic.
Fix it there and add a minimal, specific test sample instead.
We don't invert unreliable tests for the same reason that makes them
unreliable, but for a different one, e.g., a nonstandard test that fails
even with the additional requirements installed, or a test that shows
wrong output but also produces an error.
An update in TeX Live causes the test to pass (also for Kornel), so
now we uninvert the test.
I looked at the output file, and it seems fine to me (although it is
long, and I just checked briefly).
The new TeX Live uses font encoding TU for Unicode fonts with Xe- and LuaTeX.
The command \textquotedbl for straight quotation marks is no longer supported,
and \textipa is no longer supported with LuaTeX.
Problems with Spanish Babel and Xe/LuaTeX with 8-bit fonts lead to new errors
in some cases.
Using this label in invertedTests unnecessarily expands the test name, so that
we get labels like, e.g.:
SUSPENDED.UNRELIABLE.WRONG_OUTPUT.UNRELIABLE_export/doc/de/EmbeddedObjects_pdf4_texF
OTOH, if we use the label 'unreliable', we get a warning about clashing label names.
The best solution is to reset any previous label setting.
This encoding (modified Mac Cyrillic for Asian languages) is rarely used and not supported by GNU iconv.
Update comments in lib/encodings.
Update ctests: GNU iconv only supports cp858 if configured with "--enable-extra-encodings".
The missing character problem is fixed upstream.
Also fix the scaling of the \sun-symbol-index by wrapping the symbol in \text.
(wasysym's \sun is valid in text and math mode. LyX currently adds a spurious \ensuremath.)
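A minimal sketch of the wrapping (not the test file itself, and assuming \text is provided by amstext):

  \documentclass{article}
  \usepackage{wasysym}% \sun works in text and in math mode
  \usepackage{amstext}% provides \text
  \begin{document}
  Text: \sun, math: $\sun$.
  % even if a spurious \ensuremath is added around the symbol, the
  % \text wrapper keeps it typeset in text mode:
  Wrapped: \ensuremath{\text{\sun}}.
  \end{document}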
These tests are "unreliable" and thus their export status contains
less information than reliable tests. However, it contains some
information and could still be used to find regressions. This commit
helps keep the output of a vanilla "ctest" command clean.
See discussion here:
https://www.mail-archive.com/search?l=mid&q=20161127205800.epvjxkeri5yoeqwj%40steph
Test unicodesymbols for most supported input encodings with Kornel's addition to ctests.
Add required "forces" to unicodesymbols:
* utf8x does not support all characters supported by LyX
* several 8-bit encodings map characters to math-mode commands - force replacement in text mode so that LyX can wrap them in \ensuremath (see the sketch below).
Fix a misalignment (wrong replacements) in the Cyrillic Unicode block.
Use \mathscr for Mathematical Script characters in the Mathematical Alphanumeric Symbols block (in line with the characters in other Unicode blocks).
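As an illustration of the resulting LaTeX (a hand-written sketch, assuming mathrsfs for \mathscr; the actual replacement commands come from lib/unicodesymbols):

  \documentclass{article}
  \usepackage{mathrsfs}% one package providing \mathscr
  \begin{document}
  % a math-only replacement forced in text mode is wrapped by LyX:
  less-or-equal sign in running text: \ensuremath{\leq}

  % MATHEMATICAL SCRIPT CAPITAL A (U+1D49C):
  script A: \ensuremath{\mathscr{A}}
  \end{document}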
Specify non-TeX fonts that work in the source of documents that
fail with "missing characters" when compiled with "non-TeX fonts"=true.
(This does not interfere with the default output in any way.)
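A sketch of the corresponding LaTeX setup when compiling with XeTeX or LuaTeX (LyX sets this through Document > Settings > Fonts; FreeSerif is only an example of a font with wide glyph coverage):

  \documentclass{article}
  \usepackage{fontspec}
  \setmainfont{FreeSerif}% example only: a font covering the needed glyphs
  \begin{document}
  With a suitable non-TeX font the glyphs are found and no
  "missing character" warnings are emitted.
  \end{document}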
Add an exception to the conversion of "missing character" warnings into errors.
The PGF package deliberately uses the dummy font "nullfont" to suppress output.
Therefore, warnings about missing characters in "nullfont" are really only warnings.
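A minimal sketch of how such warnings arise (a hypothetical example, not taken from a test file):

  \documentclass{article}
  \usepackage{tikz}
  \begin{document}
  \begin{tikzpicture}
    % PGF switches to nullfont inside the picture to suppress stray output;
    % the stray word below only triggers "Missing character: There is no
    % ... in font nullfont!" warnings, it is not an error:
    stray
    \draw (0,0) -- (1,1);
  \end{tikzpicture}
  \end{document}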
Also updated the comment: "Missing character" warnings are especially widespread
in XeTeX/LuaTeX but can also happen with "classical" 8-bit TeX.
Feel free to port this to branch.