Use cases from #12352. The files correspond to
mwe-remove-bottom.23.lyx and mwe-remove-top.23.lyx. The only
difference is that I changed the fonts to FreeSans and FreeSerif.
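For reference, a minimal sketch of the kind of font setup this corresponds
to, assuming the fonts are selected as non-TeX (system) fonts via fontspec;
the actual settings in the .lyx files may differ:

    \usepackage{fontspec}
    \setmainfont{FreeSerif}% roman/serif face
    \setsansfont{FreeSans}% sans-serif face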
Start a new directory of cprotect tests. There are many situations
where cprotect is needed, so we can add files covering them as we
find them.
This particular test covers the case of special characters in URL
insets in footnotes.
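A minimal sketch of the construct being tested; the URL and surrounding
text are made up, not the actual content of the test file:

    \usepackage{cprotect}
    \usepackage{url}
    ...
    % Without \cprotect, the # and % would break inside the fragile
    % footnote argument; cprotect reads the argument verbatim.
    Some text.\cprotect\footnote{See \url{http://example.com/a_b#c%d}.}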
Noto Sans Tibetan was not actually a sans font. On newer systems the
font is now Noto Serif Tibetan.
See, e.g., the package 'fonts-noto-core' in Ubuntu 21.04.
This change also fixes compilation of
supported-languages_polyglossia-XeTeX.lyx on Ubuntu 21.10.
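A hypothetical sketch of the affected polyglossia font setup (the language
list and font options are assumptions, not the literal content of the test
file):

    \usepackage{fontspec}
    \usepackage{polyglossia}
    \setdefaultlanguage{english}
    \setotherlanguage{tibetan}
    % formerly "Noto Sans Tibetan"; newer systems ship it as:
    \newfontfamily\tibetanfont[Script=Tibetan]{Noto Serif Tibetan}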
The FontEncoding L7x is required for hyphenation but is no longer set
by Babel (since 2017-12-06).
The PostBabelPreamble now sets L7x for Lithuanian, if it is defined,
and restores the previous font encoding on exit.
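A rough sketch of the idea, not the literal PostBabelPreamble code (the
helper macro name is made up, and the real code additionally checks that
the L7x encoding is actually defined):

    \usepackage[L7x,T1]{fontenc}% make L7x available
    \makeatletter
    \addto\extraslithuanian{%
      \edef\saveLTenc{\f@encoding}% remember the current font encoding
      \fontencoding{L7x}\selectfont}
    \addto\noextraslithuanian{%
      \fontencoding{\saveLTenc}\selectfont}% restore it on exit
    \makeatother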
This happens with "inputenc: auto-legacy" if a language with default
encoding "utf8" (e.g. Turkmen or Mongolian) is used in a Quote
(or another environment).
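Roughly, the exported LaTeX looks like this sketch (the concrete encodings
and the quote environment are just an example):

    \usepackage[latin1]{inputenc}% auto-legacy: encoding of the main language
    ...
    \begin{quote}
    \inputencoding{utf8}% language whose default encoding is utf8 (e.g. Turkmen)
    ...
    \inputencoding{latin1}% switch back before leaving the environment
    \end{quote}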
Simplify user preamble.
Use a common test document for Xe- and LuaTeX with polyglossia
and a special one for languages only supported by XeTeX.
Update tagging patterns and comments.
Allow use of the font MonomakhUnicode. The font is available in TeX Live,
and making a symbolic link in ~/.fonts/fonts that points to the appropriate
directory makes the font available to the system as well.
LyX follows LaTeX in dropping support for this combination
(it only worked by tricking "inputenc.sty").
There is no known case where this combination is required or helpful.
For power users with special needs, XeTeX + TeX fonts is still
available after setting the input encoding to "ascii" or "utf8-plain".
See also #10600.
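A minimal sketch of that remaining route, assuming input encoding "ascii"
(so no inputenc is loaded) and classic 8-bit TeX fonts:

    % compiled with xelatex; the document body is restricted to ASCII
    \usepackage[T1]{fontenc}
    \usepackage{lmodern}% a TeX font, no fontspec involved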
Following the suggestion in the Babel-Azerbaijani documentation,
we use the glyphs from the Cyrillic fonts for the Latin
text character. This fits better than using IPA fonts (assuming
matching Latin and Cyrillic fonts are specified) and also provides
bold etc.
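A rough sketch of the idea, presumably for the Latin schwa; the macro
names and mapping are assumptions, not LyX's actual output:

    \usepackage[T2A,T1]{fontenc}
    % take the schwa glyph from the Cyrillic (T2A) fonts ...
    \DeclareTextCommand{\textschwa}{T2A}{\cyrschwa}
    % ... whenever the current encoding does not provide one:
    \DeclareTextSymbolDefault{\textschwa}{T2A}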
The encoding cp858 is supported by only some iconv variants.
Most users will want to change their "encoding" setting instead
of installing/recompiling "iconv" to support this legacy encoding.
The ctests are likely to fail with either "vanilla" or "enhanced"
iconv and test a situation that is unlikely to change in general,
so we now ignore this test by default.
Separate the xetex-inputenc test sample into working and non-working parts.
Sort HTML-only tests.
Update tagging and ignore-rules.
Change inputencoding to utf8 in dedicated tests (get pdf4_texF working).
Thai works fine with LuaTeX, TeX-fonts and auto-legacy input encoding.
Remove obsolete preamble code; we now load "fontenc" with Japanese
documents by default.