output, due to failure to clean the ids in the new citation stuff.
I've solved this by allowing the citation format information to contain
keys of the form "clean:key". This signals that we are to apply the
html::cleanAttr() function to the key before returning it, i.e., we
basically strip any non-alphanumeric characters.
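A minimal sketch of the idea, for illustration only (the map lookup, the
helper names and the stand-in cleanAttr() are assumptions; only the "clean:"
prefix and html::cleanAttr() come from the actual change):

    #include <cctype>
    #include <map>
    #include <string>

    // Stand-in for html::cleanAttr(): keep only alphanumeric characters.
    std::string cleanAttr(std::string const & s)
    {
        std::string out;
        for (char c : s)
            if (std::isalnum(static_cast<unsigned char>(c)))
                out += c;
        return out;
    }

    // Hypothetical key lookup: a "clean:" prefix means the result of looking
    // up the underlying key is sanitized before it is returned.
    std::string getCiteValue(std::string const & key,
                             std::map<std::string, std::string> const & vals)
    {
        static std::string const prefix = "clean:";
        if (key.compare(0, prefix.size(), prefix) == 0) {
            auto it = vals.find(key.substr(prefix.size()));
            return it == vals.end() ? std::string() : cleanAttr(it->second);
        }
        auto it = vals.find(key);
        return it == vals.end() ? std::string() : it->second;
    }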
We only need to sort the formats when it is really necessary, i.e. for GUI
purposes.
This also prevents 11000 requests for translation every time the toolbars
are updated.
The use of this profiler is trivial:
* #include <support/pmprof.h>
* in the block one wants to profile, add
PROFILE_THIS_BLOCK(some_identifier)
* At the end of the execution, statistics will be sent to standard error.
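For example (a minimal usage sketch; the function name is made up, only the
header and the macro come from the description above):

    #include <support/pmprof.h>

    void updateMetrics()
    {
        // Time spent in this block is accumulated under the identifier
        // "updateMetrics"; the statistics are printed to standard error
        // when the program exits.
        PROFILE_THIS_BLOCK(updateMetrics)

        // ... code to be profiled ...
    }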
- fileformat change
- it was a pity that LyX did not yet support a simple rectangular frame without a defined width, although it already supported this for e.g. oval frames
- \fbox and \mbox often occur in TeX files and can now be imported
Previously, the format used for included pdf files was the same as for
document export via ps2pdf. This caused unwanted conversion routes, e.g.
export via odt->pdf instead of dvi->ps->pdf.
I renamed the format for included graphics and not the one for exported
documents, since otherwise the command line syntax for export would change.
This would require more adaptations from users, whereas with the chosen
solution the custom converters are almost always adjusted correctly in
prefs2prefs(), so that only custom external templates need manual adjustment.
Before, it was done on editingFinished(). The new behavior is more
intuitive because it is easier for the user to see how editing the box
is connected with enabling/disabling the other widgets. Before, the user
would not get instant feedback and would have to click away before the
connection is revealed. This new behavior is less efficient, however,
because checkEnabled() is called after every keystroke.
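A rough sketch of the difference, with made-up widget names (not the actual
dialog code): the enabled state is now recomputed from textChanged(), i.e. on
every keystroke, instead of from editingFinished().

    #include <QApplication>
    #include <QCheckBox>
    #include <QLineEdit>
    #include <QVBoxLayout>
    #include <QWidget>

    int main(int argc, char ** argv)
    {
        QApplication app(argc, argv);
        QWidget dialog;
        QLineEdit * widthED = new QLineEdit;
        QCheckBox * otherCB = new QCheckBox("Other option");
        QVBoxLayout * layout = new QVBoxLayout(&dialog);
        layout->addWidget(widthED);
        layout->addWidget(otherCB);

        // New behavior: react to every keystroke, so the user sees at once
        // how the contents of the box control the other widget.
        QObject::connect(widthED, &QLineEdit::textChanged,
                         [otherCB](QString const & text) {
                             otherCB->setEnabled(!text.isEmpty());
                         });
        // Old behavior would have been:
        // QObject::connect(widthED, &QLineEdit::editingFinished, ...);

        dialog.show();
        return app.exec();
    }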
need the master buffer here, too: in looking up fonts during XHTML
output.
I'm half tempted to pass the master buffer to these routines, though
there are times, I think, when we want the actual buffer: e.g., when
looking up branches.
* Powerdot now also uses the native overlay item arguments
* a list option argument is finally available
* \pause natively supported (like in beamer)
* support for \onslide (via InsetFlex)
* support for \twocolumn
File format change.
With this commit, old beamer frames are converted to new ones. The old styles are removed (including the infamous \lyxframe).
This should be tested with as many beamer documents as possible (I have already done so). Also, tex2lyx now probably produces invalid LyX files.
work we do when calling plaintext() for the purpose of generating
material for the advanced search function.
Here again, not only were we parsing BibTeX files (ever since Julien's
sensible introduction of plaintext output for that inset), but we were
in fact writing complete plaintext output to disk for included files
every time we did such a search.
worth doing, as we were creating too much output for tooltips anyway.
But we need to ignore BibTeX insets altogether, as the collection of
the references, etc., is too slow.
so we can write a limited amount when using this for TOC and
tooltip output.
This should solve the problem with slowness that Kornel noticed,
which was caused by our trying to write an entire plaintext
bibliography every time we updated the TOC. We did that because
he had a bibliography inside a branch, and we use plaintext for
creating the tooltip that goes with the branch list.
Other related bugs were fixed along the way. E.g., it turns out
that, if someone had an InsetInclude inside a branch, then we would
have been writing a *plaintext file* for that inset every time we
updated the TOC. I wonder if some of the other reports of slowness
we have received might be due to this kind of issue?
I added it because of a misunderstanding. If the encoding is really set too
late, to_utf8() in the Token constructor would probably fail, but a non-empty
putback buffer is no problem, since it was always created via to_utf8() ->
from_utf8().
Somehow I overlooked that \sideset also supports nonscript arguments for
left and right. This is now fixed, although I do not like the toolbar names.
If somebody knows something better, please improve.
This fixes bug #8554 and some recently introduced bugs:
- Encodings::fromLaTeXCommand() can now handle all combining characters,
  not only the one-letter ones
- The remainder returned from Encodings::fromLaTeXCommand() must never be
thrown away in tex2lyx, but output as ERT
- No special case for combining diacritical marks needed anymore in parse_text()
- No special cases for accents and IPA combining diacritical marks needed
anymore in parse_text()
- special tipa shortcuts may only be recognized if the tipa package is loaded
- Use requirements returned by Encodings::fromLaTeXCommand() instead of
hardcoded registering of tipa and tipax
- Get rid of the name2 variable in parse_text(): We must use name, otherwise
the extra stuff that might have been put into name vanishes
This default argument is inserted iff no inset argument is present. This is useful particularly for mandatory arguments that need to have a sensible default value.
tex2lyx skips LyX preamble code only if it thinks that the file was created
by LyX (i.e. special comments are found). I don't like that, but this is
how it works, and therefore the special comment needs to be added if the
theorem commands are to be skipped. This header also makes the temporary
setting of in_lyx_preamble obsolete, which caused user preamble commands
to be skipped and not re-added by LyX.
- Convert prettyref to the autopackage mechanism
- Do not load refstyle automatically if some refstyle preamble code of LyX
is found, since LyX will only load the package if an actual reference
command is used. This is needed for mixed refstyle/prettyref documents.
- Only recognize refstyle commands if refstyle was detected in the preamble
- Only recognize prettyref commands if prettyref was detected in the preamble
- Add a mixed refstyle/prettyref test case
* use verbatimStuff for parsing chunks and try to follow the Sweave syntax constraints closely.
* merge the two cases for parsing << (noweb or quote)
* \verb|ff| requires that its parameter is on a single line.
- support for Cyrillic characters
- support for \textifsymbol and \ascii (fixes bug #8556)
- support for \ding
- tex2lyx/text.cpp: correct an indentation and use "name2" because "name" is already defined in this clause
- for all TIPA characters the tipa and tipx packages must always be registered because these characters are also allowed outside a TIPA inset
- include \textvertline because we again reached the maximum number of if statements allowed by MSVC (so this saves at least one for now)
The LyX file format only supports one CJK font mapping per document.
Therefore all CJK environments which use a different mapping than the main
one are output in ERT. This is now explained in the test case, and also the
conversion to ERT is improved (although it is still wrong e.g. for paragraphs).
- support for \textdoublevertline, \textvertline, \textglobfall
- support \!o and \!b and \!d and \!g and \!G and \!j
- support \*k and \*r and \*t and \*w
- register the package "tipa" if \textipa is detected in an equation
- add a \textipa equation to the testfile
(I failed to implement support for the TIPA character "\t*{ }" and the command "\=*".)
- Use the LyX name of encodings instead of the LaTeX names.
The LyX name must be unique, while the name used by LaTeX
is not necessarily unique, since e.g. different packages might
implement support for the same encoding.
- Rename koi8 to koi8-r, so that the LyX and LaTeX names match.
- Rename euc-jp-plain to euc-jp-platex, jis-plain to jis-platex
and shift-jis-plain to shift-jis-platex.
- Add utf8-platex encoding (fixes #8408).
LyX file format incremented to 463.
- support for the combining diacritical marks
- support for \texttoptiebar and \textbottomtiebar
- test-insets.lyx: add complete testcase
- TODO.txt: update what still needs to be done
This is achieved by not calling Parser::tokenize_one() anymore in
Parser::good(): The status of the input can be tested without performing the
actual tokenizing. Now there are only two methods that may prevent an encoding
change: next_token() and next_next_token().
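A simplified model of the change (not the actual tex2lyx code; the class and
member names are placeholders): good() inspects already-buffered tokens and
the raw stream state instead of tokenizing ahead.

    #include <cstddef>
    #include <sstream>
    #include <string>
    #include <vector>

    class MiniParser {
    public:
        explicit MiniParser(std::string const & in) : is_(in), pos_(0) {}

        bool good()
        {
            if (pos_ < tokens_.size())
                return true;      // a token is already buffered
            if (!is_)
                return false;
            is_.peek();           // only updates the stream state
            return is_.good();    // no tokenize_one() call here, so a
                                  // pending encoding change stays possible
        }

    private:
        std::istringstream is_;
        std::vector<std::string> tokens_;
        std::size_t pos_;
    };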
The toolbar image is the one Uwe attached to the bug report. Note that
\sideset works only for operators like \sum in the nucleus. LyX allows
any content, so you might get a LaTeX error. I don't know how to prevent
wrong content in the nucleus.
This is an addendum to [72a44b3c/lyxgit] because depending on the environment, LyX adds a \phantomsection before \addcontentsline.
- also update the test file
This is an important part of the tests. If updating the test cases is really a
problem, there are two better solutions than not testing the format: convert
the references with lyx2lyx on the fly, or remove the hard coupling of tex2lyx
and LyX versions again. Both introduce additional possible error sources, but
these errors could at least be detected. If the test machinery is made blind
to versions, file format errors are impossible to detect.
Jean-Marc discovered a possible data loss caused by Parser::getChar().
This is now fixed, and Parser::getChar() is removed, since it is no longer
needed and is easy to use in the wrong way.
- nothing needs to be done for feyn.sty - it is already recognized as a simple feature and the roundtrip using the feynman example file works
- update entry for undertilde to be uniform
When switching catcodes (for example to verbatim), there are in general some tokens that have already been parsed according to the old catcodes. It is therefore necessary to "forget" those tokens and reparse them.
The basic idea is to use idocstream::putback() to feed the characters back to the stream, but this does not work (probably because we disable buffering). Therefore, we implement a simple stream-like class that implements putback().
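A minimal sketch of such a class, assuming this shape (the real implementation
in tex2lyx differs in names and details):

    #include <deque>
    #include <istream>

    // Wrapper around an input stream with its own putback buffer, so that
    // characters can always be fed back even if the underlying stream does
    // no buffering of its own.
    class PutbackStream {
    public:
        explicit PutbackStream(std::istream & is) : is_(is) {}

        // Return the next character, preferring previously put-back ones.
        int get()
        {
            if (!putback_.empty()) {
                int const c = putback_.back();
                putback_.pop_back();
                return c;
            }
            return is_.get();
        }

        // Put a character back; the next get() will return it.
        void putback(char c) { putback_.push_back(c); }

        bool good() const { return !putback_.empty() || is_.good(); }

    private:
        std::istream & is_;
        std::deque<char> putback_;
    };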
The current code is not able to unset an environment variable, only to set it to an empty value. This patch refactors the Message class a bit and uses a new EnvChanger helper class that allows one to change an environment variable temporarily and to unset variables if needed (a rough sketch follows below).
The patch also adds new functions hasEnv and unsetEnv in environment.cpp.
Open issues:
* there may be systems where unsetenv is not available and putenv("name=") does not do the right thing;
* unsetenv may lead to leaks on some platforms.
* when using unsetenv, we may need to remove strings from the internal map that setEnv uses.
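A rough sketch of what such a helper could look like (POSIX-only, and not the
actual LyX code):

    #include <cstdlib>
    #include <string>

    // Set an environment variable for the lifetime of the object and
    // restore, or unset, the previous state on destruction.
    class EnvChanger {
    public:
        EnvChanger(std::string const & name, std::string const & value)
            : name_(name), had_value_(false)
        {
            if (char const * old = std::getenv(name.c_str())) {
                had_value_ = true;
                old_value_ = old;
            }
            setenv(name.c_str(), value.c_str(), 1);
        }

        ~EnvChanger()
        {
            if (had_value_)
                setenv(name_.c_str(), old_value_.c_str(), 1);
            else
                unsetenv(name_.c_str());  // not available everywhere,
                                          // see the open issues above
        }

    private:
        std::string name_;
        std::string old_value_;
        bool had_value_;
    };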
At this moment we do not allow comparison between arbitrary hashes,
but the code is ready for that except for the GUI.
Thanks to git's powerful revision addressing, we could even ask
for comparisons like '-2 weeks back' if someone wants to play
with GuiCompareHistory.
Only the local index is considered, no remote repo. Also getting the revision
(aka commit hash) is missing. How push and pull could be integrated with the
LyX VCS interface needs to be discussed, but the implemented functionality was
quite straight forward.
The advantage of having this in LyX is the intelligent file name handling
of included files. The implementation is as discussed on the list, but we also
ensure that an attempt to use locked files fails.
This only moves the code, but does not change the displayed labels.
Thus, for numerical citations, the label is set to the cite number;
for author-year citations, the abbreviated list of authors is used,
i.e. "Smith et al. 2001".
Eventually, we might want to make the label customizable, or get
it from BibTeX.
It is not possible to transport the different error/abort/success conditions
of a VCS checkin through a simple empty or nonempty string. Therefore it was
no wonder that the return values were used inconsistently for different VCS.
Now the log and return status are decoupled, and all VCS are handled
consistently. This is a prerequisite for proper locking detection of the
upcoming rename command.
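Sketched as an interface only (the names below are illustrative, not the
actual LyX VCS API): the outcome travels as an explicit status value, and the
log message is returned separately.

    #include <string>

    enum class CheckInStatus { Success, ErrorBefore, ErrorWhile, Cancelled };

    struct CheckInResult {
        CheckInStatus status;  // what actually happened
        std::string log;       // human-readable message, independent of status
    };

    // Every backend reports through the same structure, so callers no longer
    // have to guess the outcome from an empty or non-empty string.
    CheckInResult checkIn(std::string const & message);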
This is a refactoring commit, which may introduce minor regressions, but should go in the right direction.
* introduce Parser::verbatimEnvironment, which is a wrapper around verbatimStuff
* introduce output_ert, which inserts a string as ERT; this makes it possible to factor out several duplicated pieces of code.
* rename handle_ert to output_ert_inset: this creates an inset and then calls output_ert (sketched below)
* remove handle_comment (use handle_ert instead)
* remove handle_backaspace (unused now)
* use the new methods in handle_listing
TODO:
* use for other verbatim-like cases (Sweave chunk, url...)
* maybe implement Parser::unparse to reparse tokens already parsed with the wrong catcodes (if needed)
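A simplified sketch of how the two relate (the signatures and the exact ERT
markup here are approximations, not the real tex2lyx code):

    #include <ostream>
    #include <string>

    // Write a string as raw ERT content.
    void output_ert(std::ostream & os, std::string const & s)
    {
        // the real function also handles line breaks and special characters
        os << s;
    }

    // Create an ERT inset and delegate the contents to output_ert.
    void output_ert_inset(std::ostream & os, std::string const & s)
    {
        os << "\\begin_inset ERT\nstatus collapsed\n\n";
        output_ert(os, s);
        os << "\n\\end_inset\n";
    }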
- Implement catcode setting in Parser
- add a new Parser::verbatimStuff method that reads verbatim contents
- use this method to parse "verbatim" environment.
- use it to parse \verb too.
- rename Parser::verbatimEnvironment to ertEnvironment.
TODO:
- use for other verbatim-like cases (Sweave chunk, lstlisting...)
- factor out the function that outputs ERT (including line breaks)
- maybe implement Parser::unparse (if needed)
* some functionality is in new modules now
new header locations and library names: QtConcurrent and QtWidgets
* method setResizeMode is renamed to setSectionResizeMode
* deprecated QAbstractItemModel::reset() is dropped now
* platform-specific code like QApplication::syncX() is not common anymore
* QString::fromAscii() is dropped now
* some QDesktopServices methods have been moved to QStandardPaths
I do not see where this is really useful (contrary to real plaintext). And it breaks spell checking with hunspell and 8-bit dictionaries (part of bug #8526)
Provide functions for translating to the LyX name
of an encoding from either a LaTeX name or an Iconv
name, with the possibility to specify the package.
This is in anticipation of changing to use the LyX
name of the encoding in the .lyx file format and
allowing multiple lib/encodings entries to have
the same LaTeX name (but different packages!).
The tex2lyx parser needs to worry about the iconv
name of the input encoding, so store that instead
of the LaTeX name.
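An interface sketch of what such lookups might look like (the names and the
package enum are assumptions, not the actual Encodings API):

    #include <string>

    // Packages that may define a LaTeX encoding name; several packages can
    // reuse the same LaTeX name for different encodings.
    enum class EncodingPackage { none, inputenc, CJK, japanese };

    // Translate a LaTeX or iconv encoding name into the unique LyX name.
    std::string lyxNameFromLaTeX(std::string const & latexname,
                                 EncodingPackage package);
    std::string lyxNameFromIconv(std::string const & iconvname,
                                 EncodingPackage package);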
This bug was reported against the ubuntu build:
https://bugs.launchpad.net/bugs/1100046
Additionally, some extra code for avoiding double undo entries has
been removed, since this is handled by grouped undo now.
This patch implements 'move row' and 'move column' features for tabular.
The purpose is to provide a useful behavior in tabular that is
consistent with PARAGRAPH_MOVE_UP and PARAGRAPH_MOVE_DOWN so that the
user can, for example, press alt-<up> to move a row up. In addition,
icons for these features are added to the table toolbar and
context menu.
If there is any selection, the feature is disabled. This is consistent
with how PARAGRAPH_MOVE_UP works in other contexts. Additionally, 'move
row' is disabled if there is a multi-row in the current or target row;
and 'move column' is disabled if there is a multi-column in the current
or target column.
'move row' moves only the left and right borders of a cell along with
the row. Similarly, 'move column' moves only the top and bottom
borders.
Implementing similar functionality for other insets, such as arrays and
array environments, is on my TODO list.
Reordering citations is one case where watching for "Citation undefined
on page ..." does not catch the need for a bibtex rerun. This patch
ensures the proper ordering is obtained in pdf output without having
to resort to closing and reopening the LyX document.