When using CJK, don't try to close a language that was never
opened before, such as when it is the main language.
(cherry picked from commit 7e51b5f301)
- If a deleted display math does not start a new paragraph, the
current \lyxdeleted macro (if any) must be closed and a new one
started; otherwise the display math would be shifted up.
- Use \linewidth instead of \columnwidth, because the former adapts
to the reduced horizontal width in list environments, avoiding
shifting the display math to the right.
(cherry picked from commit 7f23ca912c)
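For illustration, here is a minimal standalone sketch of why \linewidth is
the right width inside lists (this is not LyX's actual \lyxdeleted code):
  \documentclass{article}
  \begin{document}
  \begin{itemize}
  \item A box as wide as \verb|\linewidth| stays within the item's text block:
        \par\noindent\makebox[\linewidth]{$\displaystyle a^2 + b^2 = c^2$}
  \item A box as wide as \verb|\columnwidth| sticks out into the right margin:
        \par\noindent\makebox[\columnwidth]{$\displaystyle a^2 + b^2 = c^2$}
  \end{itemize}
  \end{document}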
This commit does a bulk fix of incorrect annotations (comments) at the
end of namespaces.
The commit was generated by initially running clang-format and then
extracting from the resulting diff the hunks corresponding to
fixes of namespace comments. The applied changes and all the
results have been manually reviewed. The source code successfully
builds on macOS.
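To illustrate the kind of fix involved (a hand-written example, not an
actual hunk from this commit):
  // before: the closing annotation is missing or does not follow the
  // "} // namespace <name>" convention
  namespace lyx {
  namespace frontend {
  void example();
  } // frontend
  } // lyx

  // after clang-format with FixNamespaceComments:
  namespace lyx {
  namespace frontend {
  void example();
  } // namespace frontend
  } // namespace lyx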
Further details on the steps below, in case they're of interest to
someone else in the future.
1. Check out a fresh and up-to-date version of src/:
git pull && git checkout -- src && git status src
2. Ensure there's a suitable .clang-format in place, i.e. with options
to fix the comment at the end of namespaces, including:
FixNamespaceComments: true
SpacesBeforeTrailingComments: 1
and that clang-format is >= 5.0.0, by doing e.g.:
clang-format -dump-config | grep Comments:
clang-format --version
3. Apply clang-format to the source:
clang-format -i $(find src -name "*.cpp" -or -name "*.h")
4. Create a diff and filter out the hunks related to fixing the namespace comments:
git diff -U0 src > tmp.patch
grepdiff '^} // namespace' --output-matching=hunk tmp.patch > fix_namespace.patch
5. Filter the hunks corresponding to simple fixes into a separate patch:
pcregrep -M -e '^diff[^\n]+\nindex[^\n]+\n--- [^\n]+\n\+\+\+ [^\n]+\n' \
-e '^@@ -[0-9]+ \+[0-9]+ @@[^\n]*\n-\}[^\n]*\n\+\}[^\n]*\n' \
fix_namespace.patch > fix_namespace_simple.patch
6. Manually review the simple patch and then apply it, after first
restoring the source.
git checkout -- src
patch -p1 < fix_namespace_simple.patch
7. Manually review the (simple) changes and then stage them:
git diff src
git add src
8. Again apply clang-format and filter out hunks related to any
remaining fixes to the namespace comments, this time filtering with
more context. There will be fewer hunks, as all the simple cases have
already been handled:
clang-format -i $(find src -name "*.cpp" -or -name "*.h")
git diff src > tmp.patch
grepdiff '^} // namespace' --output-matching=hunk tmp.patch > fix_namespace2.patch
9. Manually review/edit the resulting patch file to remove hunks for files
which need to be dealt with manually, noting the file names and
line numbers. Then restore the files to their state from before
clang-format was applied, and apply the patch:
git checkout src
patch -p1 < fix_namespace2.patch
10. Manually fix the files noted in the previous step. Stage files,
review changes and commit.
Make sure to properly nest \begin{lang} and \end{lang} tags even
when no language package is selected. In this case, LyX assumes
that babel is being used, so the language names might be wrong
if the user has arranged to use polyglossia in the preamble.
Nevertheless, we ensure that the produced output is syntactically
correct, so that by adding proper preamble code a correct output
is still possible.
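For instance, with babel the properly nested output looks roughly as
follows (a hand-written sketch, not actual LyX output; with polyglossia
the language names and commands would differ, which is what suitable
preamble code would have to account for):
  \documentclass{article}
  \usepackage[ngerman,english]{babel}
  \begin{document}
  Text in the main language.
  \begin{otherlanguage}{ngerman}
    Text in the embedded language.
    \begin{otherlanguage}{english}
      Text in a further nested language.
    \end{otherlanguage}
  \end{otherlanguage}
  \end{document}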
This commit fixes the regression introduced in 2.2 about the
output of en- and em-dashes. In 2.2 en- and em-dashes are output as
the \textendash and \textemdash macros when using TeX fonts, causing
changed output in old documents and also bugs (for example, #10490).
Now documents produced with older versions work again as intended,
while documents produced with 2.2 can be made to produce the exact
same output by simply checking "Don't use ligatures for en- and
em-dashes" in Document->Settings->Fonts.
When exporting documents using TeX fonts to earlier versions, in order
to avoid changed output, a zero-width space character is inserted after
each en/em-dash if dash ligatures are allowed. These characters are
removed when reloading documents with 2.3, so that they don't accumulate.
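As a plain illustration of the two kinds of output involved when using TeX
fonts (a hand-written sketch, not the code LyX generates):
  \documentclass{article}
  \begin{document}
  Ligatures:        -- (en dash) and --- (em dash) \par
  Explicit macros:  \textendash\ (en dash) and \textemdash\ (em dash)
  \end{document}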
This avoids some duplicate code. Note that the return type of
Paragraph::getAlign had to be changed. I suspect it was set to char in
order to avoid having to include an extra header in Paragraph.h.
LyX assumes that everything in \lyxdeleted is struck out by ulem
and increases the corresponding counter. However, deleted display
math material is struck out using tikz. As we also take into
account the deletion of underlined display math (in order to
properly position such material vertically), we have to take
care that the count is correct.
It should now be possible to underline or strike out any kind
of math inset containing any math construct indigestible to ulem.
While this was already possible for inline math insets, they could
break if, for example, an aligned environment was used.
This is now also possible for display math. Even if this can be
nonsensical and not visually perfect, at least no latex errors
should be generated if one tries.
Font changes are brought inside the \lyxdeleted macro, just before
outputting the latex code for the math inset. The inset writes a
signature before itself and this is checked by \lyxsout in order to
recognize a display math. So, the font changes confuse \lyxsout, which also
swallows the first macro at the very start of \lyxdeleted. The result
is that the font changing command is not seen by latex and \sout is also
used to further strike out the formula already struck out by tikz.
This commit makes sure that the expected signature actually appears
just after the opening brace of \lyxdeleted. It also accounts for a
paragraph break occurring just before the math inset, in order to not
introduce too much vertical space, which is noticeable when using
larger font sizes.
Showing deleted display math by enabling "Show Changes in Output" was
only possible with dvi (through dvipost). Although LyX strikes out
such formulas on screen, it was impossible to obtain output
directly using pdflatex (or other engines producing pdf) because
ulem cannot cope with display math material and gives errors.
The solution is to strike out such deleted formulas ourselves.
I took several options into account. One of them would produce
output similar to dvipost (which strikes out each element), but
it would have required many more changes in the output routines.
Eventually, I opted for using tikz, which gives cleaner
output (as it simply requires adding a preamble and a postamble
to the latex code of any displayed math, instead of markup
tailored to each particular math construct). The look of the pdf
output is similar to the way LyX strikes out the equations on screen.
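The idea, reduced to a minimal standalone example (this is not the actual
preamble and postamble code emitted by LyX):
  \documentclass{article}
  \usepackage{tikz}
  \begin{document}
  \begin{center}
  \begin{tikzpicture}
    % wrap the whole displayed formula in a node ...
    \node[inner sep=2pt] (m) {$\displaystyle \int_0^1 x^2\,dx = \frac{1}{3}$};
    % ... and strike it out with a single line across its bounding box
    \draw (m.south west) -- (m.north east);
  \end{tikzpicture}
  \end{center}
  \end{document}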
This enables error reporting for the preamble, provided the preamble is written
using the new InPreamble layouts.
In the future, I find it preferable to deprecate the usual preamble in favour of
InPreamble layouts rather than implementing error reporting for the usual
preamble. This requires some improvements to code editing in the buffer view
first (line breaking behaviour, syntax highlighting).
texstring is a pair of a docstring and a corresponding TexRow. The row count in
the TexRow has to match the number of lines in the docstring.
otexstringstream is an output string stream that can be used to create
texstrings (i.e. it's an odocstringstream that records the TexRow information
and lets us extract a texstring from it).
texstrings can be passed around and output to otexstream and otexrowstream,
which produce accurate TexRow information by concatenating TexRows.
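As a rough conceptual sketch of that invariant and of concatenation
(hypothetical stand-in types, not LyX's actual texstring/TexRow classes):
  #include <string>

  // Hypothetical stand-in: generated TeX text paired with a row count
  // that must match the text's line structure (counted here as one row
  // per newline, as a simplification).
  struct TexChunk {
      std::string text;
      int rows = 0;
  };

  TexChunk makeChunk(std::string const & s)
  {
      int rows = 0;
      for (char c : s)
          if (c == '\n')
              ++rows;
      return {s, rows};
  }

  // Concatenation preserves the invariant by adding up both parts.
  TexChunk concat(TexChunk const & a, TexChunk const & b)
  {
      return {a.text + b.text, a.rows + b.rows};
  }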
Deal properly with the paragraph separator tag.
We really need to use that tag as a kind of general marker for which
tags we're responsible for in a given paragraph and which tags we are
not. So the changes to InsetText.cpp use the tag as that kind of marker.
Note that, as of this commit, the User Guide again exports without any
kind of error. I haven't yet checked the other manuals.
This fixes bug #8022.