A branch inset modifies the layout of the internal structures in
which the text is organized. When a branch is active, its content is
typeset as if the inset were not there, but the mere presence of the
inset makes a paragraph that would not otherwise be the last one
actually be the last one, or makes the check for the language of the
previous paragraph fail because there is no previous paragraph before
the first one inside a branch inset.
One way I found to tackle this is to track whether the typeset
paragraphs are actually part of an active branch inset and to act
accordingly.
Fixes wrong and missing characters in text parts in other languages
(platex does not support "inputenc").
Fixes compilation errors due to desynchronized encoding switches.
* New: also support utf8 (working around a false-positive test in "inputenc.sty").
* Do not force the change of input encoding to "ascii".
Deny compilation with XeTeX if a document uses TeX fonts and an unsupported input encoding.
This accesses the inulemcmd output parameter, which protects specific
commands (\cite, \ref) by wrapping them in an \mbox.
This is needed in ulem and soul commands, since their complex
detokenization makes such commands (which produce multiple words via local
assignments) fail.
So it is now possible to properly support ulem and soul via
[inset]layout.
Fixes a case reported in #9404
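A minimal LaTeX sketch of the protection this enables (package usage and citation key are just placeholders):

  \usepackage{ulem}
  ...
  % without the \mbox, ulem's detokenization breaks the \cite command
  \uline{as shown by \mbox{\cite{someKey}}, the effect is significant}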
Remove special code for CJK that is no longer required now that
we use CJKutf8 document-wide with inputenc "utf8-cjk"
(and "utf8" for languages requiring CJK) (since 7bbf333fa1).
CJK characters can no longer be used with a document-wide 8-bit encoding.
(Hint: Use utf8-cjk or one of the CJK legacy encodings if your document contains CJK characters.)
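Roughly, the exported document now has the following shape (a sketch; the exact preamble LyX generates may differ):

  \usepackage[utf8]{inputenc}
  \usepackage{CJKutf8}
  \begin{document}
  \begin{CJK}{UTF8}{}
  ... whole document body, including CJK characters ...
  \end{CJK}
  \end{document}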
1.) Make sure the environment is mentioned in the string used for search
(added the keyword \latexenvironment{...})
2.) Handle it similarly to \textcolor{}
That way we can also search for 'conclusion*' or 'summary' etc.
in Additional.lyx (see the sketch below).
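A hypothetical illustration of the resulting search string; the exact brace structure of this internal keyword is an assumption, by analogy with \textcolor{}{}:

  % searching for text inside a 'summary' environment
  \latexenvironment{summary}{text to be found}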
The specific test was introduced in ef6be5f4 because
CJKutf8 was relatively new (cf. lyx.org/trac/ticket/5386).
10 years on, CJKutf8 is an established part of the CJK bundle
and we can skip the special test for CJKutf8 to make the logic
considerably simpler to read, maintain, and debug.
The problem here is that selecting any subset of a \lettrine{}
line always creates an initials header. That makes it impossible
for our search engine to find strings, because the regex does not
contain that info. So we have to discard the leading \lettrine part
completely.
We now place a marker (\endarguments) to determine that removable
part.
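For reference, a minimal lettrine line; the marker placement shown in the comment is an assumption:

  \usepackage{lettrine}
  ...
  \lettrine{O}{nce} upon a time there was a drop cap.
  % assumed search string: \lettrine{O}{nce}\endarguments upon a time ...
  % everything before the marker is the part that gets discarded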
If Document>Settings>Language>Encoding is set to any value except "auto" or "default", we
expect the whole document to use this encoding. With encodings from the CJK package, this means
one big "CJK" environment and no encoding switches.
Characters that are not handled by the CJK package need to be "forced" in lib/unicodesymbols.
This is completed for "euc-cn"; the others will follow.
A \clearpage command issued right before \end{CJK} is recommended by the
package author to prevent any unprocessed CJK chars outside the
\begin{CJK} and \end{CJK} scope. Otherwise, the TOC, header, and footer
may contain CJK chars but get processed outside the CJK environment scope.
The new dedicated export test fails without the fix.
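A minimal sketch of the recommended pattern (encoding and font family are placeholders):

  \begin{CJK}{GB}{song}
  ... document body with CJK characters ...
  % flush floats, headers, and footers while the CJK environment is still active
  \clearpage
  \end{CJK}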
The Thai tis620-0 input encoding is supported via the inputenc "plug in"
(data) file tis620.def from https://ctan.org/pkg/babel-thai.
We can handle it like the other contributed input encodings, e.g.,
Greek (ISO 8859-7) and the several Cyrillic encodings from
http://www.ctan.org/pkg/latex-cyrillic.
Under TeX Live 2018, the input encoding defaults to utf8 if there is no call to
inputenc. The added test file fails without the patch but compiles fine if the
file "tis620.def" is present in the TEXPATH.
The problem was that the different list environments
did not look different in the LaTeX output used for
search.
So the input "\item ..." did not indicate whether it belongs to a
description, lyxlist, enumeration, or labeling environment.
In search mode we now use "\item{enumeration}" etc.
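For illustration, the search-mode output now distinguishes the list types; apart from the quoted "\item{enumeration}", the exact environment names are assumptions:

  \item{enumeration} a numbered point
  \item{description} a term and its explanation
  \item{labeling} a labeled entry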
The change is significant if the search format is not disabled.
We try to analyze the pattern string first, to determine which of the
expected features (color, language, char style, char decoration) the
searched string contains and thus which are needed for the search.
There are still some problems, though.
If an RTL language is set via an environment in polyglossia, only a nested
\text<lang> command will reset the direction for LTR languages.
Fixes the rest of #10111.
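A minimal polyglossia sketch of the case in question (the language pair is just an example):

  \setmainlanguage{english}
  \setotherlanguage{hebrew}
  ...
  \begin{hebrew}
    ... RTL text ...
    % only this nested command resets the direction back to LTR
    \textenglish{an LTR insertion}
  \end{hebrew}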
This uses the InsetArgument interface to provide access to a document
part hitherto inaccessible by LyX: the part between \begin and the first
\item in a list (where lengths and counters can be redefined, for
instance).
Fixes: #11098
File format change, layout format change
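A minimal sketch of the document part that becomes accessible (the redefinition is just an example):

  \begin{itemize}
    % material before the first \item, now reachable via an InsetArgument
    \setlength{\itemsep}{0pt}
    \item First entry
    \item Second entry
  \end{itemize}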
This allows (some) verbatim contents in macros, such as \url's with
specific chars (#, % etc.) in section headings or footnotes (#449)
or comments in captions (#9313).
The mentioned two bugs are fixed by this commit.
Note that the implementation is still rather basic and might need
extension for other cases.
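A minimal sketch of the kind of content this makes possible (URLs are placeholders; assumes the url or hyperref package is loaded):

  % a URL with special characters in a footnote
  Some text.\footnote{See \url{https://www.example.org/faq?q=50%25}}
  % and in a section heading (depending on the setup, hyperref may also be needed)
  \section{Downloads: \url{https://www.example.org/files#latest}}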
This gets rid of the hardcoded latin1 encoding for verbatim. Instead,
verbatim now inherits the encoding from the context, which is what is
actually wanted here.
Fixes: #9012, #9258
In some insets such as Arguments, a local language switch has to be
used. However, if the language inside the inset was set to be equal
to the outer language, the code decided not to switch the language,
but then got confused and tried to close a switch that was never opened.
This patch forces the switch even if the outer language is the same.
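Schematically, with babel and English as both the outer and inner language (the babel command is shown; the actual command depends on the language package):

  % the local switch is emitted even though inner and outer language coincide,
  % so the closing brace always has a matching opening command
  \foreignlanguage{english}{content of the argument inset}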
This commit does a bulk fix of incorrect annotations (comments) at the
end of namespaces.
The commit was generated by initially running clang-format, and then
from the diff of the result extracting the hunks corresponding to
fixes of namespace comments. The changes being applied and all the
results have been manually reviewed. The source code successfully
builds on macOS.
Further details on the steps below, in case they're of interest to
someone else in the future.
1. Check out a fresh and up-to-date version of src/:
git pull && git checkout -- src && git status src
2. Ensure there's a suitable .clang-format in place, i.e. with options
to fix the comment at the end of namespaces, including:
FixNamespaceComments: true
SpacesBeforeTrailingComments: 1
and that clang-format is >= 5.0.0, by doing e.g.:
clang-format -dump-config | grep Comments:
clang-format --version
3. Apply clang-format to the source:
clang-format -i $(find src -name "*.cpp" -or -name "*.h")
4. Create a diff and filter out the hunks related to fixing the namespace comments:
git diff -U0 src > tmp.patch
grepdiff '^} // namespace' --output-matching=hunk tmp.patch > fix_namespace.patch
5. Filter out hunks corresponding to simple fixes into a separate patch:
pcregrep -M -e '^diff[^\n]+\nindex[^\n]+\n--- [^\n]+\n\+\+\+ [^\n]+\n' \
-e '^@@ -[0-9]+ \+[0-9]+ @@[^\n]*\n-\}[^\n]*\n\+\}[^\n]*\n' \
fix_namespace.patch > fix_namespace_simple.patch
6. Manually review the simple patch and then apply it, after first
restoring the source.
git checkout -- src
patch -p1 < fix_namespace_simple.patch
7. Manually review the (simple) changes and then stage the changes
git diff src
git add src
8. Again apply clang-format and filter out hunks related to any
remaining fixes to the namespace, this time filter with more
context. There will be fewer hunks as all the simple cases have
already been handled:
clang-format -i $(find src -name "*.cpp" -or -name "*.h")
git diff src > tmp.patch
grepdiff '^} // namespace' --output-matching=hunk tmp.patch > fix_namespace2.patch
9. Manually review/edit the resulting patch file to remove hunks for files
which need to be dealt with manually, noting the file names and
line numbers. Then restore files to as before applying clang-format
and apply the patch:
git checkout src
patch -p1 < fix_namespace2.patch
10. Manually fix the files noted in the previous step. Stage files,
review changes and commit.
Make sure to properly nest \begin{lang} and \end{lang} tags even
when no language package is selected. In this case, LyX assumes
that babel is being used, so the language names might be wrong
if the user arranged to use polyglossia in the preamble.
Nevertheless, we ensure that the produced output is syntactically
correct, so that by adding proper preamble code a correct output
is still possible.
Do not take the optional arguments from the first paragraph, but from the
paragraphs whose layout has no arguments, consistently with the code in
InsetArgument::updateBuffer (i.e., what was shown on screen).
This adds support for the chapterbib package, but also adds ways to
produce this sort of multibib with biblatex and bibtopic (which are
both incompatible with chapterbib).
File format change.
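A minimal chapterbib sketch (file and style names are placeholders):

  \usepackage{chapterbib}
  ...
  % each \include'd chapter ends with its own
  % \bibliographystyle{plain} and \bibliography{refsN}
  \include{chapter1}
  \include{chapter2}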
Until now this was not done for essentially two reasons. The first
one is that local switches are used for short text insertions, so that
they are unlikely to cross environment boundaries. The second one
is that if we have to close a language at the end of an environment,
we would be missing the right termination command. As this last
issue can be overcome by simply storing in the stack the current
nesting level with a sign denoting the kind of switch, there is no
longer any reason not to also track local language switches.
Also factor out some commonly used constructs in order to improve
readability.
If the document language is opened outside of any environment, we risk
not closing it if no other language switch occurs. Indeed, the stack is
emptied only at the end of an environment. We could of course also empty
it at the end of the document, but we would then produce an unnecessary
language switch.
When using polyglossia, LyX was making a real mess when changing the
language inside nested insets. The \begin{language} and
\end{language} commands were not well paired, so that they could
easily occur just before and after the start or end of an
environment. Of course, this was causing LaTeX errors such as
"\begin{otherlanguage} ended by \end{environment}".
There may still be some cases I did not take into account.
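The correctly paired output should look roughly like this sketch (language and environment are just examples):

  \begin{quote}
    \begin{otherlanguage}{german}
      Deutscher Text
    % the language is closed before the surrounding environment ends
    \end{otherlanguage}
  \end{quote}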
Since we process layouts sequentially, we export LaTeX code for the
title once we arrive at a layout that has InTitle false. If the
document then later has a layout with InTitle true, we do not
(currently) go back to add it to the title and just output it
in-place. We previously warned with LYXERR0, but since this can
cause missing or unexpected output we now warn in the GUI.
For more information, see the following lyx-devel thread:
https://www.mail-archive.com/search?l=mid&q=a65ae226-d3bd-8fc5-a93b-7bb23f1cda82%40lyx.org
Prevent encoding changes whenever the TeX engine is XeTeX or LuaTeX,
as XeTeX/LuaTeX use only one encoding per document:
* with useNonTeXFonts: "utf8plain",
* with XeTeX and TeX fonts: "ascii" (inputenc fails),
* with LuaTeX and TeX fonts: only one encoding accepted by luainputenc.
+1 no needless encoding switches
+1 runparams.encoding matches the correct encoding at any time
+1 less complicated code.
-1 there may still be problems with CJK (possibly impossible to
solve for Xe/LuaTeX with TeX fonts).
For LuaTeX & TeX fonts, the complete document uses the encoding
of the global document language.
See also #9740.
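For example, with LuaTeX and TeX fonts the preamble can contain at most one such call (the encoding name is a placeholder):

  % luainputenc accepts a single encoding for the whole document
  \usepackage[latin1]{luainputenc}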
Actually, the changed tests were used to prevent overwriting the encoding
changed in Buffer::writeLaTeX with a language-default encoding.
This is still required for XeTeX with TeX-fonts unless a proper solution is found.
Documents with more than one encoding and TeX-fonts fail with LuaTeX,
as "luainputenc" can only handle one encoding.
With inputenc == "auto" or "default", the encoding changes with
the language and must be reset after a possible language switch in insets
or environments (see #6216).
However, whether we need to do this does not depend on 8-bit TeX vs. LuaTeX,
but on the possible use of more than one encoding in the document.
With "nonTeXFonts", the encoding is utf8; LuaTeX with TeX fonts requires
encoding handling similar to 8-bit TeX.
(Additionally, the value of "params.inputenc" could be tested: if it is
not "auto" or "default", we have just one common encoding and could skip
the reset as well.) Not sure how much time this saves, though.
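Schematically, with inputenc set to "auto" the export may contain paired switches like these (the encodings shown are just examples):

  \selectlanguage{russian}
  \inputencoding{koi8-r}
  ... Russian text ...
  \selectlanguage{english}
  \inputencoding{latin1}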
This is a tiny simplification that makes the code easier to understand and
will help in fixing the remaining regressions. The logic remains the same;
no export test result is changed.
Fixes output for 3 of the 4 test lyx-files.
Includes "FIXME"s at places where further action is required to get the XeTeX
export right but I don't know how.
b1c68dccf8 and 46aed6d2b9 fixed some language nesting issues, but introduced
a regression for the case where there is a standard paragraph in a foreign
language, followed by a list (e.g. itemize) in the same language, followed
by the end of the document, as e.g. in lib/doc/de/Additional.lyx. The reason
for this was that not all language-ending commands reset
state->open_polyglossia_lang_ correctly.
I am sure that one can still construct broken corner cases, and I am also sure
that this was already possible before b1c68dccf8 and 46aed6d2b9. However,
this change seems to fix the most important issues; to get nesting completely
correct we would probably need some stack-like structure for languages and
encodings, also for the CJK part (which is not touched at all by this commit).
The code that sets open_polyglossia_lang_ is not only executed for polyglossia,
but also for babel, so we have to use the correct language end command.
* TexRow now computes rows from a DocIterator. In practice, the cursor
highlighting is now correct inside insets, it is no longer restricted to the
topmost level. It certainly also makes forward-search more precise.
* Added the option to disable a texrow when not needed, for performance.
* Fixed a bug where the last paragraph was not properly highlighted.
Limitations:
* TexRow still does not handle: math (e.g. multi-cell), sub-captions, inset
arguments.
There are still a few warnings of the kind
(style) Variable 'x' is assigned a value that is never used.
since I did not touch code where I was not sure whether there might be a real
bug, and I kept some for symmetry reasons as well.
Newer boost versions use complicated type traits for boost::next and
boost::prior, which do not work with the RandomAccessList iterators.
The long term solution is to use std::next and std::prev, for now supply
simple replacements for compilers that do not support C++11 yet.