At the time, there were two competing packages for the French language:
frenchle (aka french.sty), the historical one, and frenchb.ldf, the
new kid on the block. It was difficult to know which one was loaded by
babel, and frenchle did not define \og and \fg. Thus the need for our
own definition.
These were the good old days, but this time is gone for good.
It is now possible to specify in the lib/languages file whether screen
rows can be broken anywhere (CJK languages) or only at word boundaries.
Set WordWrap to false for the CJK languages (notice that japanese-cjk
had been forgotten before).
Moreover, remove a test for separators in row elements that was not
really helpful.
Fixes part of ticket #10299.
FWIW, this code only matters for very old versions of LyX, older than 1.1.5 (released 19 years ago, on 2000/06/06).
Funny fact of the day: byte strings do not behave like regular strings
in Python 3 when indexing. To get a sub-string we need to pass a range;
an integer index does not work as it does with a regular string:
$ ipython3
...
In [30]: line
Out[30]: b'#This file was created by <mike> Tue Jan 25 10:36:51 2000'
In [31]: line[0]
Out[31]: 35
In [32]: line[0:1]
Out[32]: b'#'
The range notation works for both byte and regular strings in Python 3, and it also works in Python 2.
Thus the change is simple and effective. In any case I should confess that I was quite surprised by this. :-)
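For illustration, a minimal sketch of the kind of change this requires (the line and the check below are made up, not the actual lyx2lyx code):
line = b'#This file was created by <mike> Tue Jan 25 10:36:51 2000'
# In Python 3, line[0] is the integer 35, so this comparison is always False.
broken = (line[0] == b'#')
# A one-element slice returns a bytes object in both Python 2 and Python 3.
fixed = (line[0:1] == b'#')
print(broken, fixed)  # under Python 3: False True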
In Python it is possible to compare tuples in lexicographic order.
Take advantage of that, since there is no need to resort to the C-style
trick of converting the version to hex format.
We need to set a dummy version in case we are using ImageMagick to ensure that version is always an integer 3-tuple.
It worked in Python 2, but not in the way the authors imagined, because hex() always returns a string.
From python2:
>>> 1 > "2"
False
>>> "2" > 1
True
>>> "1" > 2
True
The rationale is that in Python 2 an integer always compares as smaller than a string.
In Python 3 this raises a TypeError, because it does not make sense to
compare objects of different types.
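A small sketch of the idea, with a made-up helper and threshold rather than the actual converter code:
def version_tuple(version_string):
    # Turn a dotted version such as '6.9.10' into the integer tuple (6, 9, 10).
    return tuple(int(part) for part in version_string.split('.'))
needed = (6, 8, 9)                 # made-up threshold
version = version_tuple('6.9.10')
dummy = (0, 0, 0)                  # dummy version when detection fails (the ImageMagick case)
print(version >= needed)           # True: tuples compare element by element
print(dummy >= needed)             # False; behaves the same in Python 2 and Python 3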
Amends 7bb30286.
Tested cases are now handled fine.
(There are still many cases where the language support emulation
is too complex for lyx2lyx and manual fixes are required after
lyx2lyx conversion.)
Correct or activate some already present shortcuts, and add new ones
for easily obtaining the most common fixed size delimiters.
Pressing '*' after a delimiter will cycle through all sizes.
This is related to the bug #11457 saga and it was my fault.
The debug files should be written only if the argument --debug is passed, not --verbose as was done by mistake.
This effectively allows paragraph breaks in insets only for cosmetic
reasons (e.g., to align contents on different lines).
This is the last change necessary for an enhanced covington gloss support
(which uses the new covington gloss UI).
In Python 3 the colors need to be strings, not bytes.
This was the equivalent of
>>> print("%s" % b"1")
b'1'
Since the colors were bytes, the call to dvipng was something like
dvipng -Ttight -depth -height -D 115 -fg "b'rgb 0.937255 0.941176 0.945098'" -bg "b'rgb 0.137255 0.149020 0.160784'" "lyxpreviewxBJEqm.dvi"
Note the "b'rgb after both -fg and -bg that wrecked havoc and thus dvipng failed. That was the difference between python2 and python3 calls.
The "Rows & Columns" optional submenu is more easily accessible in
the math context menu rather than having to navigate to the "Edit" menu.
All possible accelerators are already taken, so use the space bar.
Same for BackTab. The outline-in was originally (31398779)
appended to the command-sequence. Probably it was
placed at the end to be conservative (i.e., so that it would only
change behavior where there was a no-op before).
This fixes #11576.
Following the suggestion in the Babel-Azerbaijani documentation,
we use the glyphs from the Cyrillic fonts for the Latin
text character. This fits better than IPA fonts (assuming matching
Latin and Cyrillic fonts are specified) and also provides bold etc.
Latin Modern works fine with Japanese.
If "lmodern" is set for \font_roman the "lmodern.sty" package sets
sans-serif and teletype to Latin Modern fonts as well.
Therefore, \font_sans and \font_teletype are better left as "default"
(less preamble code in the LaTeX source).
The "outer" language of the table was set to English leading to wrong output
(swapped columns and words with non-TeX fonts, wrong characters with TeX-fonts).
The algorithm in [c9be8bff74b233/lyxgit] did not
account for layout nesting. As a result, some parentheses
were swapped in English text parts
(e.g. around "(for Linux)" in he/Intro.lyx).
AsBabelOptions was introduced in 2010 in [cc5dd37a2a05/lyxgit].
Since the re-organization and opening of the Babel package to
"contributed" language definitions in March 2013, it is no longer required.
Clean up after Part 1 [1361f1a45f23/lyxgit].
PDF outline improves with unicode/utf8 (although some characters are still wrong).
Math: ERT for umlauts no longer required (now force-converted with unicodesymbols)
Thai works fine with LuaTeX, TeX-fonts and auto-legacy input encoding.
Remove obsolete preamble code;
we now load "fontenc" with Japanese documents by default.
Since we now auto-load "textcomp" also for encodable characters,
we no longer need to force conversions defined in ts1enc.dfu.
FIXME: this is currently not working as intended, because
exclusion (force != ...) seems to fail with a list of encodings
and the characters are nevertheless force-converted.
This reverts commit c56adfc8ec.
I am reverting this because LyX uses an italic font for representing
mathalpha symbols and it is funny when a vertical arrow looks like
a leaning tower.
The unicodesymbols file should be audited in order to add the
mathalpha flag to all symbols having a math representation.
If the flag is missing, pasting into mathed a symbol that has a math
definition produces \text{\ensuremath{\symbname}}, because LyX assumes
by default that the symbol is a textmode one.
* Do not ignore Japanese (platex) with system fonts.
* CJK can be used with XeTeX and TeX-fonts if the input encoding is utf8;
  do not ignore this combination.
* TODO: set non-TeX fonts and uninvert where possible.
While not required for hyphenation, using T1 as default font encoding
helps with text in Latin script (pre-composed accented characters,
Nordic letters "eth" and "thorn").
Fixes wrong and missing characters in text parts in other languages
(platex does not support "inputenc").
Fixes compilation errors due to desynchronized encoding switches.
* Force unicodesymbols conversion for all *-platex input encodings,
  except some characters that work well in utf8.
* Use platex if document language is "japanese" and input encoding is "utf8".
The category tag was rarely used and thus not very useful. This adds
categorization to most modules (the rest will follow) and uses the
\DeclareCategory tag we use in layouts rather than the extra syntax
we used in modules. Categories are now added to the po files and
translated.
Note that this is work in progress: the current categories are still
subject to change.
The ultimate goal of this is to sort the modules in the GUI by category
as we do with layouts, examples and templates (and add a filter to search
for specific modules).
As it is now (with the many modules we accumulated), the module selector
is not really usable anymore. If you don't happen to know how exactly a
module is named, selecting a module is really a PITA.
* New: also support utf8 (working around a false positive test in "inputenc.sty").
* Do not force the change of input encoding to "ascii".
Deny compilation with XeTeX if a document uses TeX fonts and a non-supported input encoding.
* some Japanese (platex) documents fail with inputenc "utf8-platex"
(missing characters in non-Japanese text parts), because the
Unicodechar definitions from "inputenc" are not used.
* some Japanese (platex) documents show wrong output with "auto",
because platex ignores the encoding switch for text parts
in other languages.
* Japanese Beamer documents must set default output to "pdf",
because dvipdfm(x) produces wrong output with document class "Beamer".
* update tagging/inverting rules.
* use HE8 font encoding for Hebrew in language test.
While HE8 provides more characters and prevents use of bitmap fonts,
forcing its use may break older installations.
The dedicated test file 012_hebrew_he_HE8.lyx provides an
example for use of HE8 encoded fonts with babel-hebrew.
The "nikud" (vowel) signs, shindot, and shindot are combining Unicode
characters. However, LaTeX-Hebrew expects them as postfix characters, not
accent macros (cf. www.cs.tau.ac.il/~stoledo/Bib/Pubs/vowels.pdf).