Archive for the ‘KDE’ Category

Boost dependencies and bcp

August 21, 2016

Recently I generated diagrams showing the header dependencies between Boost libraries, or rather, between various Boost git repositories. Diagrams showing dependencies for each individual Boost git repo are here along with dot files for generating the images.

The monster diagram is here:

Edges and Incidental Modules and Packages

The directed edges in the graphs represent that a header file in one repository #includes a header file in the other repository. The idea is that, if a packager wants to package up a Boost repo, they can’t assume anything about how the user will use it. A user of Boost.ICL can choose whether ICL will use Boost.Container or not by manipulating the ICL_USE_BOOST_MOVE_IMPLEMENTATION preprocessor macro. So, the packager has to list Boost.Container as some kind of dependency of Boost.ICL, so that when the package manager downloads the boost-icl package, the boost-container package is automatically downloaded too. The dependency relationship might be a ‘suggests’ or ‘recommends’, but the edge will nonetheless exist somehow.

In practice, packagers do not split Boost into packages like that. At least for Debian packages, they split compiled static libraries into packages such as libboost-serialization1.58, and put all the headers (all header-only libraries) into a single package, libboost1.58-dev. Perhaps the reason packagers put it all together is that there is little value in splitting the header-only repositories in monolithic Boost from each other if they will all be installed together anyway. Or perhaps the sheer number of repositories makes splitting impractical. This is in contrast to KDE Frameworks, which does consider such edges and dependency graph size when determining where functionality belongs. Typically KDE aims to define the core functionality of a library on its own in a loosely coupled way with few dependencies, and then adds integration and extension for other types in higher level libraries (if at all).

Another feature of my diagrams is that repositories which depend circularly on each other are grouped together in what I called ‘incidental modules‘. The name is inspired by ‘incidental data structures’ which Sean Parent describes in detail in one of his ‘Better Code’ talks. From a packager point of view, the Boost.MPL repo and the Boost.Utility repo are indivisible because at least one header of each repo includes at least one header of the other. That is, even if packagers wanted to split Boost headers in some way, the ‘incidental modules’ would still have to be grouped together into larger packages.
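Viewed as a graph problem, these ‘incidental modules’ are exactly the strongly connected components of the repository include graph. A small sketch of how such groups can be computed (this is not my actual tooling, and the toy graph below is illustrative):

```python
def strongly_connected_components(graph):
    """Tarjan's algorithm: return the SCCs of a directed graph
    given as {node: [successors]}. Mutually including repos end
    up in the same component."""
    counter = [0]
    index, lowlink = {}, {}
    stack, on_stack = [], set()
    result = []

    def visit(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:
            # v is the root of a component: pop it off the stack
            component = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                component.add(w)
                if w == v:
                    break
            result.append(component)

    for v in graph:
        if v not in index:
            visit(v)
    return result

# Toy include graph: mpl and utility include each other,
# icl depends on mpl but nothing depends back on icl.
edges = {
    "mpl": ["utility"],
    "utility": ["mpl"],
    "icl": ["mpl"],
}
groups = strongly_connected_components(edges)
print(sorted(map(sorted, groups)))  # → [['icl'], ['mpl', 'utility']]
```

Any component with more than one member is an ‘incidental module’ which a packager cannot split.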

As far as I am aware such circular dependencies don’t fit with Standard C++ Modules designs or the design of Clang Modules, but that part of C++ would have to become more widespread before Boost would consider their impact. There may be no reason to attempt to break these ‘incidental modules’ apart if all that would do is make some graphs nicer, and it wouldn’t affect how Boost is packaged.

My script for generating the dependency information is simply grepping through the include/ directory of each repository and recording the #included files in other repositories. This means that while we know Boost.Hana can be used stand-alone, if a packager simply packages up the include/boost/hana directory, the result will have dependencies on parts of Boost because Hana includes code for integration with existing Boost code.
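The scan can be sketched like this (a simplified model of what my grep does; note that mapping the first path component after boost/ to a git repository is not always one-to-one):

```python
import re

# Matches "#include <boost/...>" or '#include "boost/..."' and
# captures the first path component after boost/.
INCLUDE_RE = re.compile(r'^\s*#\s*include\s*[<"]boost/([^/">]+)', re.MULTILINE)

def included_boost_components(header_text):
    """Return the set of top-level boost/ path components #included
    by a header. Flat headers like boost/optional.hpp map to 'optional'."""
    found = set()
    for component in INCLUDE_RE.findall(header_text):
        found.add(component[:-4] if component.endswith(".hpp") else component)
    return found

sample = """
#include <boost/container/set.hpp>
#include <boost/mpl/vector.hpp>
#include <set>
#   include <boost/optional.hpp>
"""
print(sorted(included_boost_components(sample)))  # → ['container', 'mpl', 'optional']
```

Crucially, this records the edge whether or not the #include sits inside an #ifdef, which is exactly the packager's point of view.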

Dependency Analysis and Reduction

One way of defining a Boost library is to consider the group of headers which are gathered together and documented together to be a library (there are other ways, which some in Boost prefer – it is surprisingly fuzzy). That is useful for documentation at least, but as evidenced above it appears not to be useful from a packaging point of view. So, are these diagrams useful for anything?

While Boost header-only libraries are not generally split in standard packaging systems, the bcp tool is provided to allow users to extract a subset of the entire Boost distribution into a user-specified location. As far as I know, the tool scans header files for #include directives (ignoring ifdefs, like a packager would) and gathers together all of the transitively required files. That means that these diagrams are a good measure of how much stuff the bcp tool will extract.
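The gathering bcp does can be modelled as a transitive closure over #include edges (a sketch of the idea only; the real tool works on actual files, and the file names below are illustrative):

```python
def transitive_includes(start, includes):
    """bcp-style gathering: the transitive closure of #include edges
    from a starting header, ignoring preprocessor conditionals
    (like a packager would)."""
    gathered, todo = set(), [start]
    while todo:
        f = todo.pop()
        if f in gathered:
            continue
        gathered.add(f)
        todo.extend(includes.get(f, []))
    return gathered

# Toy include graph
includes = {
    "icl/interval_set.hpp": ["icl/detail.hpp", "mpl/vector.hpp"],
    "icl/detail.hpp": ["mpl/bool.hpp"],
    "mpl/vector.hpp": ["mpl/bool.hpp"],
}
print(sorted(transitive_includes("icl/interval_set.hpp", includes)))
# → ['icl/detail.hpp', 'icl/interval_set.hpp', 'mpl/bool.hpp', 'mpl/vector.hpp']
```

Every edge in the diagrams therefore translates directly into more files in the bcp output.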

Note also that these edges do not contribute time to your slow build – reducing edges in the graphs by moving files won’t make anything faster. Rewriting the implementation of certain things might, but that is not what we are talking about here.

I can run the tool to generate a usable Boost.ICL which I can easily distribute. I delete the docs, examples and tests from the ICL directory because they make up a large chunk of the size. Such a ‘subset distribution’ doesn’t need any of those. I also remove 3.5M of preprocessed files from MPL. I then need to define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS when compiling, which is easy and explained at the end:

$ bcp --boost=$HOME/dev/src/boost icl myicl
$ rm -rf myicl/libs/icl/{doc,test,example}
$ rm -rf myicl/boost/mpl/aux_/preprocessed
$ du -hs myicl/
15M     myicl/

Ok, so it’s pretty big. Looking at the dependency diagram for Boost.ICL you can see an arrow to the ‘incidental spirit’ module. Looking at the Boost.Spirit dependency diagram you can see that it is quite large.

Why does ICL depend on ‘incidental spirit’? Can that dependency be removed?

For those ‘incidental modules’, I selected one of the repositories within the group and named the group after it. To see why ICL depends on ‘incidental spirit’, we have to examine all 5 of the repositories in the group to check which one is responsible for the dependency edge.

boost/libs/icl$ git grep -Pl -e include --and \
  -e "thread|spirit|pool|serial|date_time" include/

Formatting wide terminal output is tricky in a blog post, so I had to make some compromises in the output here. Those ICL headers are including Boost.DateTime headers.

I can further see that gregorian.hpp and ptime.hpp are ‘leaf’ files in this analysis. Other files in ICL do not include them.

boost/libs/icl$ git grep -l gregorian include/
boost/libs/icl$ git grep -l ptime include/
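The ‘leaf’ check amounts to finding headers that no other header in the library includes (a sketch with a toy include map; the file names are illustrative):

```python
def leaf_headers(includes):
    """Headers which no other header in the map includes.
    Deleting a leaf cannot break any other header in the
    same library (though it can break user code that wants it)."""
    included_somewhere = {h for deps in includes.values() for h in deps}
    return set(includes) - included_somewhere

includes = {
    "icl/interval_map.hpp": ["icl/interval_set.hpp"],
    "icl/interval_set.hpp": [],
    "icl/gregorian.hpp": [],
    "icl/ptime.hpp": [],
}
# Note that top-level entry points are leaves too, not just
# optional extras like the date/time integration headers.
print(sorted(leaf_headers(includes)))
# → ['icl/gregorian.hpp', 'icl/interval_map.hpp', 'icl/ptime.hpp']
```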

As it happens, my ICL-using code also does not need those files. I’m only using icl::interval_set<double> and icl::interval_map<double>. So, I can simply delete those files.

boost/libs/icl$ git grep -l -e include \
  --and -e date_time include/boost/icl/ | xargs rm

and run the bcp tool again.

$ bcp --boost=$HOME/dev/src/boost icl myicl
$ rm -rf myicl/libs/icl/{doc,test,example}
$ rm -rf myicl/boost/mpl/aux_/preprocessed
$ du -hs myicl/
12M     myicl/

I’ve saved 3M just by understanding the dependencies a bit. Not bad!

Mostly the size difference is accounted for by no longer extracting boost::mpl::vector, and secondly the Boost.DateTime headers themselves.

The dependencies in the graph are now so few that we can consider each one and ask why it is there and whether it can be removed. For example, there is a dependency on the Boost.Container repository. Why is that?

include/boost/icl$ git grep -C2 -e include \
   --and -e boost/container
#   include <boost/container/set.hpp>
#   include <set>

#   include <boost/container/map.hpp>
#   include <boost/container/set.hpp>
#   include <map>

#   include <boost/container/set.hpp>
#   include <set>

So, Boost.Container is only included if the user defines ICL_USE_BOOST_MOVE_IMPLEMENTATION, and otherwise not. If we were talking about C++ code here we might consider this a violation of the Interface Segregation Principle, but we are not, and unfortunately the realities of the preprocessor mean this kind of thing is quite common.

I know that I’m not defining that and I don’t need Boost.Container, so I can hack the code to remove those includes, eg:

index 6f3c851..cf22b91 100644
--- a/include/boost/icl/map.hpp
+++ b/include/boost/icl/map.hpp
@@ -12,12 +12,4 @@ Copyright (c) 2007-2011:
-#   include <boost/container/map.hpp>
-#   include <boost/container/set.hpp>
 #   include <map>
 #   include <set>
-#else // Default for implementing containers
-#   include <map>
-#   include <set>

This and following steps don’t affect the filesystem size of the result. However, we can continue to analyze the dependency graph.

I can break apart the ‘incidental fusion’ module by deleting the iterator/zip_iterator.hpp file, removing further dependencies in my custom Boost.ICL distribution. I can also delete the iterator/function_input_iterator.hpp file to remove the dependency on Boost.FunctionTypes. The result is a graph which you can at least reason about being used in an interval tree library like Boost.ICL, quite apart from our starting point with that library.

You might shudder at the thought of deleting zip_iterator if it is an essential tool to you. Partly I want to explore in this blog post what will be needed from Boost in the future when we have zip views from the Ranges TS or use the existing ranges-v3 directly, for example. In that context, zip_iterator can go.

Another feature of the bcp tool is that it can scan a set of source files and copy only the Boost headers that are included transitively. If I had used that, I wouldn’t need to delete the ptime.hpp or gregorian.hpp etc because bcp wouldn’t find them in the first place. It would still find the Boost.Container etc includes which appear in the ICL repository however.

In this blog post, I showed an alternative approach to the bcp --scan attempt at minimalism. My attempt is to use bcp to export useful and as-complete-as-possible libraries. I don’t have a lot of experience with bcp, but it seems that in scanning mode I would have to re-run the tool any time I used an ICL header which I had not used before. With the modular approach, it would be less-frequently necessary to run the tool (only when directly using a Boost repository I hadn’t used before), so it seemed an approach worth exploring the limitations of.

Examining Proposed Standard Libraries

We can also examine other Boost repositories, particularly those which are being standardized in newer C++ standards, because we know that any, variant and filesystem can be implemented with only standard C++ features and without Boost.

Looking at Boost.Variant, it seems that use of the Boost.Math library makes that graph much larger. If we want Boost.Variant without all of that Math stuff, one thing we can choose to do is copy the one math function that Variant uses, static_lcm, into the Variant library (or somewhere like Boost.Core or Boost.Integer for example). That does cause a significant reduction in the dependency graph.

Further, I can remove the hash_variant.hpp file to remove the Boost.Functional dependency.

I don’t know if C++ standardized variant has similar hashing functionality or how it is implemented, but it is interesting to me how it affects the graph.

Using a bcp-extracted library with Modern CMake

After extracting a library or set of libraries with bcp, you might want to use the code in a CMake project. Here is the modern way to do that:

add_library(boost_mpl INTERFACE)
target_compile_definitions(boost_mpl INTERFACE
  BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS
)
target_include_directories(boost_mpl INTERFACE
  "${CMAKE_CURRENT_SOURCE_DIR}/myicl"
)

add_library(boost_icl INTERFACE)
target_link_libraries(boost_icl INTERFACE boost_mpl)
target_include_directories(boost_icl INTERFACE
  "${CMAKE_CURRENT_SOURCE_DIR}/myicl"
)
add_library(boost::icl ALIAS boost_icl)

Boost ships a large chunk of preprocessed headers for various compilers, which I mentioned above. The reasons for that are probably historical and obsolete, but the files will remain, and they are used by default when compiling with GCC; that is unlikely to change. To diverge from that default it is necessary to set the BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS preprocessor macro.

By defining an INTERFACE boost_mpl library and setting its INTERFACE target_compile_definitions, any user of that library gets that magic BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS define when compiling its sources.

MPL is just an internal implementation detail of ICL though, so I won’t have any of my CMake targets using MPL directly. Instead I additionally define a boost_icl INTERFACE library which specifies an INTERFACE dependency on boost_mpl with target_link_libraries.

The last ‘modern’ step is to define an ALIAS library. The alias name is boost::icl and it aliases the boost_icl library. To CMake, the following two commands generate an equivalent buildsystem:

target_link_libraries(myexe boost_icl)
target_link_libraries(myexe boost::icl)

Using the ALIAS version has a different effect however: if the boost::icl target does not exist, an error will be issued at CMake time. That is not the case with the boost_icl version. It makes sense to use target_link_libraries with targets whose names contain ::, and ALIAS makes that possible for any library.

Opt-in header-only libraries with CMake

August 9, 2016

Using a C++ library, particularly a 3rd party one, can be a complicated affair. Library binaries compiled on Windows/OSX/Linux cannot simply be copied over to another platform and used there. Linking works differently, compilers bundle different code into binaries on each platform, etc.

This is not an insurmountable problem. Libraries like Qt distribute pre-compiled binaries for major platforms, and other libraries have comparable solutions.

There is a category of libraries which sidesteps the portable binaries issue entirely. Boost is a widespread source of many ‘header only’ libraries, which don’t require a user to link to a particular platform-compatible library binary. There are many other examples of such ‘header only’ libraries.

Recently there was a blog post describing an example library which can be built as a shared library, or as a static library, or used directly as a ‘header only’ library which doesn’t require the user to link against anything. The claim is that it is useful for libraries to give users the option of consuming them as ‘header only’ libraries, adding some preprocessor magic to make that possible.

However, there is yet a fourth option: the consumer can compile the source files of the library themselves. This has the advantage that the .cpp file is not #included into every compilation unit, but still avoids the platform-specific library binary.

I decided to write a CMake buildsystem which would achieve all of that for a library. I don’t have an opinion on whether it is a good idea in general for libraries to do things like this, but if people want to do it, it should be as easy as possible.

Additionally, of course, the CMake GenerateExportHeader module should be used, but I didn’t want to change the source from Vittorio so much.

The CMake code below compiles the library in several ways and installs it to a prefix which is suitable for packaging:

cmake_minimum_required(VERSION 3.3)
project(example_lib)

# define the library
# (source and header file names here are illustrative)

set(library_srcs src/example_lib.cpp)

add_library(library_static STATIC ${library_srcs})
add_library(library_shared SHARED ${library_srcs})

add_library(library_iface INTERFACE)
target_compile_definitions(library_iface INTERFACE
  LIBRARY_HEADER_ONLY
)

add_library(library_srcs INTERFACE)
target_sources(library_srcs INTERFACE
  $<INSTALL_INTERFACE:$<INSTALL_PREFIX>/src/example_lib.cpp>
)

# install and export the library

install(FILES src/example_lib.cpp DESTINATION src)
install(TARGETS
    library_static
    library_shared
    library_iface
    library_srcs
  EXPORT library_targets
  RUNTIME DESTINATION bin
  LIBRARY DESTINATION lib
  ARCHIVE DESTINATION lib
)
install(EXPORT library_targets
  NAMESPACE example_lib::
  DESTINATION lib/cmake/example_lib
)
install(FILES example_lib-config.cmake
  DESTINATION lib/cmake/example_lib
)

This blog post is not a CMake introduction, so to see what all of those commands are about start with the cmake-buildsystem and cmake-packages documentation.

There are 4 add_library calls. The first two serve the purpose of building the library as a shared library and then as a static library.

The next two are INTERFACE libraries, a concept I introduced in CMake 3.0 when it looked like Boost might use CMake. INTERFACE targets can be used to specify header-only libraries because they carry usage requirements for consumers, such as include directories and compile definitions.

The library_iface library functions as described in the blog post from Vittorio, in that users of that library will be built with LIBRARY_HEADER_ONLY and will therefore #include the .cpp files.

The library_srcs library causes the consumer to compile the .cpp files separately.

A consumer of a library like this would then look like:

cmake_minimum_required(VERSION 3.3)


find_package(example_lib REQUIRED)


## uncomment only one of these!
# target_link_libraries(myexe 
#     example_lib::library_static)
# target_link_libraries(myexe
#     example_lib::library_shared)
# target_link_libraries(myexe
#     example_lib::library_iface)

So, it is up to the consumer how they consume the library, and they determine that by using target_link_libraries to specify which one they depend on.

Generating Python Bindings with Clang

May 18, 2016

Python Bindings

Python provides a C API to define libraries which can be imported into a python environment. That python C interface is often used to provide bindings to existing libraries written in C++ and there are multiple existing technologies for that such as PyQt, PySide, Boost.Python and pybind.

pybind seems to be a more modern implementation of what Boost.Python provides. To create a binding, C++ code is written describing how the target API is to be exposed. That gives the flexibility of defining precisely which methods get exposed to the python environment and which do not.

PyQt and PySide are also similar in that they require the maintenance of a binding specification. In the case of PySide, that is an XML file, and in the case of PyQt that is a DSL which is similar to a C++ header. The advantage both of these systems have for Qt-based code is that they ship with bindings for Qt, relieving the binding author of the requirement to create those bindings.

PyKF5 application using KItemModels and KWidgetsAddons

Generated Python Bindings

For large library collections, such as Qt5 and KDE Frameworks 5, binding specifications soon become unsuitable for manual creation and maintenance by humans, so it is common to have the bindings generated from the C++ headers themselves.

The way that KDE libraries and bindings were organized in the time of KDE4 led to PyQt-based bindings being generated in a semi-automated process, then checked into the SCM repository and maintained there. The C++ header parsing tools used to do that were written before C++11 was standardized, and have not kept pace with compilers adding new language features, or with C++ headers using them.

Automatically Generated Python Bindings

It came as a surprise to me that no bindings had been completed for the KDE Frameworks 5 libraries. An email from Shaheed Haque about a fresh approach to generating bindings using clang looked promising, but he was hitting problems with linking binding code to the correct shared libraries and generally figuring out what the end-goal looks like. Having used clang APIs before, and having some experience with CMake, I decided to see what I could do to help.

Since then I’ve been helping get the bindings generator into something of a final form for the purposes of KDE Frameworks, and any other Qt-based or even non-Qt-based libraries. The binding generator uses the clang python cindex API to parse the headers of each library and generate a set of sip files, which are then processed to create the bindings. As the core concept of the generator is simply ‘use clang to parse the headers’ it can be adapted to other binding technologies in the future (such as PySide). PyQt-based bindings are the current focus because that fills a gap between what was provided with KDE4 and what is provided by KDE Frameworks 5.

All of that is internal though and doesn’t appear in the buildsystem of any framework. As far as each buildsystem is concerned, a single CMake macro is used to enable the build of python (2 and 3) bindings for a KDE Frameworks library:

    ecm_generate_python_binding(
        TARGET KF5::ItemModels
        MODULENAME KItemModels
    )

Each of the headers in the library are parsed to create the bindings, meaning we can then write this code:

  #!/usr/bin/env python
  #-*- coding: utf-8 -*-

  import sys

  from PyQt5 import QtCore
  from PyQt5 import QtWidgets

  from PyKF5 import KItemModels

  app = QtWidgets.QApplication(sys.argv)

  stringListModel = QtCore.QStringListModel(
    ["Monday", "Tuesday", "Wednesday",
    "Thursday", "Friday", "Saturday", "Sunday"])

  selectionProxy = KItemModels.KSelectionProxyModel()

  w = QtWidgets.QWidget()
  l = QtWidgets.QHBoxLayout(w)

  stringsView = QtWidgets.QTreeView()
  stringsView.setModel(stringListModel)
  l.addWidget(stringsView)

  selectionProxy.setSelectionModel(
    stringsView.selectionModel())
  selectionProxy.setSourceModel(stringListModel)

  selectionView = QtWidgets.QTreeView()
  selectionView.setModel(selectionProxy)
  l.addWidget(selectionView)

  w.show()
  app.exec_()

and it just works with python 2 and 3.

Other libraries’ headers are more complex than KItemModels, so they have an extra rules file to maintain. The rules file is central to the design of this system in that it defines what to do when visiting each declaration in a C++ header file. It contains several small databases for handling declarations of containers, typedefs, methods and parameters, each of which may require special handling. The rules file for KCoreAddons is here.

The rules file contains entries to discard methods which can’t be called from python (in the case of heavily templated code, for example), or to replace the default implementation of the binding code with something else, in order to implement memory management correctly or to integrate better with python built-in types.
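Conceptually, each database is a list of match-and-action entries consulted while visiting declarations. A hypothetical sketch of the idea (the real generator's rule format and API differ, and the rule entries below are invented for illustration):

```python
# Hypothetical rules database: each rule matches a declaration by
# (kind, name) and either discards it or annotates the generated code.
RULES = [
    {"kind": "function", "name": "qHash", "action": "discard"},
    {"kind": "parameter", "name": "parent", "action": "transfer_ownership"},
]

def apply_rules(kind, name, default_code):
    """Return the binding code for a declaration after consulting
    the rules database; None means the declaration is discarded."""
    for rule in RULES:
        if rule["kind"] == kind and rule["name"] == name:
            if rule["action"] == "discard":
                return None          # omit from the binding entirely
            return "/* " + rule["action"] + " */ " + default_code
    return default_code              # no special handling needed

print(apply_rules("function", "qHash", "int qHash(...);"))  # → None
print(apply_rules("function", "size", "int size();"))       # → int size();
```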

Testing Automatically Generated Python Bindings

Each of the KDE Frameworks I’ve so far added bindings for gets a simple test file to verify that the binding can be loaded in the python interpreter (2 and 3). The TODO application in the screenshot is in the umbrella repo.

The binding generator itself also has tests to ensure that changes to it do not break generation for any framework. Actually extracting the important information using the cindex API is quite difficult and encounters many edge cases, like QStringLiteral (which actually expands to a lambda) appearing in a parameter initializer.

Help Testing Automatically Generated Python Bindings

There is a call to action for anyone who wishes to help on the kde-bindings mailing list!

Grantlee v5.1.0 (Codename Außen hart und innen ganz weich) now available

April 19, 2016

The Grantlee community is pleased to announce the release of Grantlee version 5.1 (Mirror). Grantlee contains an implementation of the Django template system in Qt.

This release is binary and source compatible with the previous Qt5-based Grantlee release.

Following the pattern of naming Grantlee releases with German words and phrases I encounter, this release codename reflects the API being stable while the internals change a lot. Grantlee is “Allzeit bereit”, “einfach unersetzlich” – it does everything “ganz ganz genau” :).

For the benefit of the uninitiated, Grantlee is a set of Qt based libraries including an advanced string template system in the style of the Django template system.

{# This is a simple template #}
{% for item in list %}
    {% if item.quantity == 0 %}
    We're out of {{ item.name }}!
    {% endif %}
{% endfor %}

The showcase feature in this release is the introduction of ==, !=, <, >, <=, >= and in operators for the if tag. Django has had this feature for many years, but it was missing from Grantlee until now.
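The semantics of those operators can be sketched with a toy evaluator (this is only a model of the behaviour, not Grantlee's actual parser or C++ implementation):

```python
import operator

# The comparison and membership operators the if tag now supports.
OPS = {
    "==": operator.eq, "!=": operator.ne,
    "<": operator.lt, ">": operator.gt,
    "<=": operator.le, ">=": operator.ge,
    "in": lambda a, b: a in b,
}

def eval_if(lhs, op, rhs):
    """Evaluate a Django-style {% if lhs op rhs %} condition."""
    return OPS[op](lhs, rhs)

print(eval_if(0, "==", 0))                      # → True
print(eval_if("eggs", "in", ["milk", "eggs"]))  # → True
```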

Also notable in this release are several changes from Daniel of Cutelyst. Daniel ported Grantlee from QRegExp to the more modern QRegularExpression, added better error checking, and did lots of prototyping of the if operators feature.

In order to accommodate CMake IDE generators (Visual Studio/Xcode) plugins built in debug mode now gain a ‘d‘ postfix in their name. Grantlee built in debug mode will first search for a plugin with a ‘d‘ postfix and fall back to a name without the postfix. This should make it possible to use a different generator to build Grantlee than is used to build 3rd-party plugins (eg, NMake and Visual Studio), and still be able to use the resulting binaries.
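The lookup can be sketched as follows (a simplified model of the described behaviour, not Grantlee's actual C++ plugin-loading code):

```python
def find_plugin(name, available, debug_build):
    """Debug builds first try the 'd'-suffixed plugin name, then fall
    back to the plain name; release builds only try the plain name."""
    candidates = [name + "d", name] if debug_build else [name]
    for candidate in candidates:
        if candidate in available:
            return candidate
    return None

# A debug Grantlee prefers a debug-built plugin if one exists...
print(find_plugin("grantlee_defaulttags", {"grantlee_defaulttagsd"}, True))
# → grantlee_defaulttagsd
# ...but can still fall back to a release-built plugin.
print(find_plugin("grantlee_defaulttags", {"grantlee_defaulttags"}, True))
# → grantlee_defaulttags
```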

All of the work on making everything work well with all CMake generators was concurrent with making use of Travis and AppVeyor. All changes to Grantlee now have a large quality gate configuration matrix to pass. Of course, Qt handles all the platform abstractions needed here in the C++, but people build with many different configurations, CMake generators and platforms which need to continue to work.

All of the other changes since the last release are mostly clean-ups and minor modernizations of various kinds:

  • Make it possible to build without QtScript and QtLinguistTools
  • clazy-discovered cleanups
  • Modernization
  • Bump the CMake requirement to version 3.1 and the Qt requirement to version 5.3

I’m giving an introductory talk later today about Grantlee.

How do you use Grantlee?

April 11, 2016

Grantlee has been out in the wild for quite some years now. Development stagnated for a while as I was concentrating on other things, but I’m updating it now to prepare for a new release.

I’m giving a talk about it in just over a week (the first since 2009), and I’m wondering how people are using it these days. The last time I really investigated this was 6 years ago.

I’m really interested in knowing about other users of Grantlee and other use-cases where it fits. Here are some of the places I’m already aware of Grantlee in use:

Many areas of KDE PIM use Grantlee for rendering content such as addressbook entries and RSS feeds, along with some GUI editors for creating a new look. The qgitx tool also uses it to render commits in its view with a simple template.


It is also used in the Cutelyst web framework for generating html, templated emails and any other use-cases users of that framework have.

There is also rather advanced Grantlee integration available in KDevelop for new class generation, using the same principles I blogged about some years ago.

It is also used by the subsurface application for creating dive summaries and reports. Skrooge also uses it for report generation.

It is used in the Oyranos buildsystem, seemingly to generate some of the code compiled into the application.

Also on the subject of generating things, it seems to be used in TexturePacker, a tool for game developers to create efficient assets for their games. Grantlee enables one of the core selling points of that software, enabling it to work with any game engine.

Others have contacted me about using Grantlee to generate documentation, or to generate unit tests to feed to another DSL. That’s not too far from how kitemmodels uses it to generate test cases for proxy model crashes.

Do you know of any other users or use-cases for Grantlee? Let us know in the comments!

Aaargh! AAA – Right Good and Hygienic

March 19, 2016

Almost Always Auto Advocates Advise Automatic Application

The idea of almost-always-auto has been around for a few years and has fairly divided the C++ community. The effective result is that concrete types appear rarely in code, and variables are declared as ‘auto’ instead.

Some of the reasons to use auto include

  • It’s correct – No accidental type conversions
  • It’s general – Hiding types encourages easier maintenance
  • It’s quieter – Less type noise is higher readability
  • It’s safe – You can’t forget to initialize it

I implemented an automated clang-based tool to port an existing codebase to use auto extensively, and I used it to automate the port of Qt.

Assessing Approved Alternative Alias

In applying these rules by default, declarations which break the rule stand out much more.  The reason for non-use of auto could be type conversion, which may be accidental or deliberate.  That can lead a human to assess why the rule was broken and perhaps make the code more explicit about that.

Advocates of using AAA suggest using auto by default everywhere it can be used. This results in the type being on the right hand side instead of the left side most of the time where it is written explicitly at all.

auto myString = getSomeString();
auto myDateTime = QDateTime{};
auto myWidget = static_cast<QWidget*>(getTabWidget());

Declarations which follow the AAA rules are much more uniform with each other as a result.

Part of the appeal is that it compares well with the cxx_alias_templates feature, which also puts the name on the left and the type on the right.

using GoodContainer = std::vector<int>;

One measure of maintainability is the amount of code which needs to be ported when something incidental changes. If we change a function return type, without changing how that return value is used, then client code does not need to be ported if we use auto. That is:

auto someWidget = someFunc(); // Returns Widget*

does not change, but remains:

auto someWidget = someFunc(); // Now returns unique_ptr!

under maintenance.

Axiom Awakens Alarmist Aversion

Of course, not everyone likes the AAA style, citing concerns such as needing to see the type in the code for readability and maintenance.

Hawk-eyed readers will recognize that ‘readability and maintenance’ is both a reason to use auto and a reason not to use auto.

I can only offer the tautology that people who think using auto does not make sense will not see any sense in using auto.

Ableton Applies Agreeable Advisories

For code that we write where I work, we take something of an almost-AAA approach, using auto everywhere that an = sign appears. So, we’re likely to write

auto s1 = getString();
QString s2;

instead of

auto s1 = getString();
auto s2 = QString{};

although I could imagine that changing some day.

Nevertheless, that is where I have had most of my exposure to extensive use of auto by default. I liked it so much that I wanted to apply it everywhere.

Obviously I didn’t want to do such porting by hand (that would be likely to introduce bugs for one thing), so I looked for automated tools.

Accumulate Adjustments; Applying Acrobatics

The clang-tidy tool only converts the most trivial of declarations. For a Qt developer, the only effect it has is to convert uses of new to assign to an auto variable of the same type.


QTabWidget* tabWidget = new QTabWidget;
QString s = getString();
int i = 7;
QVector<int>::const_iterator it = someVec.begin();
QStringList sl = getStringList();
QStringList middleStrings = sl.mid(1, 500);

will get converted to

auto tabWidget = new QTabWidget;
QString s = getString();
int i = 7;
QVector<int>::const_iterator it = someVec.begin();
QStringList sl = getStringList();
QStringList middleStrings = sl.mid(1, 500);

Qt containers, iterators and value types are not converted at all.

This is far too conservative for an advocate of auto.

So, I decided to create my own auto-modernizer. I thought about adding it to the clang-tidy tool; however, that does not build standalone, but must be built as part of llvm and clang, making it too inconvenient at this point in development.

Instead I based my tool on the excellent clazy from Sergio. That allows me to emit warnings where auto can be used, and FIX-ITs to automatically change the code. The unit tests showing before and after are most illustrative of what the tool does, to wit:

auto tabWidget = new QTabWidget;
auto s = getString();
auto i = 7;
auto it = someVec.begin();
auto sl = getStringList();
QStringList middleStrings = sl.mid(1, 500);

All opportunities to port to auto are taken, such that any remaining types in the code are the ones which stand out while reading.

The last line was not changed – do you know why? It is not a bug in the tool.

Adherence Abounds, Absent Accidents

I wrote the tool while at the same time using it on Grantlee to port those libraries to an AAA style.  This was very helpful in finding edge cases, and for learning a lot about the clang tooling API, which is very difficult for a newcomer.

I also used the tool to port Qt to AAA, which again helped iron out some bugs in the tool, and in my understanding of C++. Qt is not likely to make such heavy use of AAA, but it is useful for the port to exist for the purpose of discussion of what AAA really looks like at that scale.

After those exercises the tool seems to cover a good deal of typical Qt code.

The tool is safe for use with QStrings and the QStringBuilder expression template – it won’t replace a QString which triggers a conversion from QStringBuilder with a use of auto. It only ports declarations for which the type of the left-hand side matches the type of the right-hand side without conversion.

It can even issue warnings for locations where the tool is not able to introduce use of auto, for example because multiple different types are declared at once (and/or implicitly converted).

Apologies About Artistic Alliteration

I realize I’m breaking a sensible rule, but sometimes opportunities are too tempting!

CMake Daemon for user tools

January 24, 2016

I’ve been working for quite some time on a daemon mode for CMake in order to make it easier to build advanced tooling for CMake. I made a video about this today:

The general idea is that CMake is started as a long-running process, and can then be communicated with via a JSON protocol.

So, for example, a client sends a request like

  {
    "type": "code_completion_at",
    "line": 50,
    "path": "/home/stephen/dev/src/cmake-browser/CMakeLists.txt",
    "column": 7
  }

and the daemon responds with

Many more features are implemented, such as semantic annotation, variable introspection, contextual help etc., all without the client having to implement them itself.

Aside from the daemon, I implemented a Qt client making use of all of the features, and a Kate plugin to use the debugging features in that editor. This is the subject of my talk at FOSDEM, which I previewed in Berlin last week. Come to my talk there to learn more!

Grantlee based on Qt 5

September 10, 2014

While I’ve been off the grid I received an unusually high concentration of requests for a Qt5-based Grantlee. I’m working on getting that done for September.

Grantlee 0.4.0 (codename ARM, aber Sexy) now available

November 28, 2013

The Grantlee community is pleased to announce the release of Grantlee version 0.4 (Mirror). Source and binary compatibility are maintained as with all previous releases.

This yearly custom is becoming an annual tradition. The most significant change in this release is a bump of the CMake version requirement. The required version is now at least CMake 2.8.9. This means that I can start using some more-recent features such as CMAKE_AUTOMOC and clean up all the code, but it isn’t high enough for the latest toys.

Grantlee is used in several places throughout KDE now. At least KDevelop, Skrooge, Rocs are using it, as well as several places in KDEPIM. The KDEPIM uses are also growing as Laurent finds excuses to use it :). I was also approached at Qt Developer Days this year by someone who uses Grantlee internally in his company for code generation. “What’s your connection to Grantlee?” – “Erm, I wrote it :)”.

As I’ve also been working on embedded projects lately, I made sure Grantlee builds and works on the Raspberry Pi. This is the first release which passes all tests on such ARM systems; Qt treats floating point numbers differently there, which previously caused test failures.

Share and Enjoy!

Grantlee 0.3.0 (codename Leistungswasser) now available

November 1, 2012

The Grantlee community is pleased to announce the release of Grantlee version 0.3 (Mirror). Source and binary compatibility are maintained as with all previous releases.

This release arrives almost a year after the last release and my previous blog post here. Meanwhile, I also created a new word, as commonly happens to anyone speaking German. When asking for ‘tap water’ (Leitungswasser) at a restaurant, I mistakenly asked for ‘performance water’ (Leistungswasser). When it’s water you need though, it might just as well be performance water.

As I noted in that last release, I had become interested in code coverage, and improving it was the focus of the 0.3.0 release. Some results can be seen comparing old with new in the core templates library for example, where I managed to bump line coverage to 100% in some more files, and made significant gains in others. The testcocoon tool was also very useful for finding dead code in places where I hadn’t expected it. There is still some low-hanging fruit there to increase the coverage stats, which will probably make the next release.

Also in this release are an implementation of the dictsort filter and special size and count properties on containers (e.g. You have {{ apples.size }} apples.).
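As a sketch of what that enables in a template (the people and name identifiers here are hypothetical; Grantlee follows Django’s filter syntax):

```
You have {{ apples.size }} apples.
{% for person in people|dictsort:"name" %}
{{ person.name }} has {{ person.apples.count }} apples.
{% endfor %}
```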

In the textdocument library, a new patch from Laurent Montel makes it possible to generate item lists in plain text and HTML using Roman numerals.