It is coming…

May 1, 2011

I joined KDE development in 2006, and joined the KDE PIM team soon after that, where my claim to fame has been EntityTreeModel and other related Qt models and proxies.

My development efforts have always been on the next-generation, Akonadi-based, Nepomuk-using, full-blown-database-depending, modular, portable beauty that will soon be known as KDE PIM 4.6.

After such a long time spent developing the new application suite, it will be great to finally pass the starting line of “Kontact 2”. This is of course just a new beginning for KDE PIM, a milestone in the application development lifecycle and a renewed focus on software that users actually have their hands on. For a long time it has made sense to fix bugs in KDE PIM 4.4 because that is what users had installed. With this release, the development effort turns fully to the future platform.

KDE PIM 4.6 is due to be released on June 7th alongside the 4.6.4 versions of the rest of the KDE application suite.

In terms of features, we think it is user-ready, and there have been many positive reports already from users of development versions. It is definitely not bug-free, however. There are still some problematic communications with Exchange servers, and some resource usage spikes, but we are confident that this is overall a step forward. Users will (mostly) be able to downgrade to KDE PIM 4.4 if the 4.6 version does not meet expectations, but that is not a long-term solution, so good bug reports will be required for a smoother experience going forward.

Making software releases is an interesting process requiring the coordination of many teams: translation teams, to ensure that the correct branch is used to generate translation files and produce translated software; promo teams, to turn the story of the release into an understandable message with the right expectations; packaging teams, to get the software into repositories for end users while ensuring the ongoing quality of existing installations; and the developers themselves, to get the remaining bugs resolved.

The packaging world is something I’ve been getting into lately so I can begin to understand what is involved with packages and distros more broadly. Technically, KDE PIM is not going to be part of any 4.6 release of the KDE Application Suite, but released alongside it on the same day. This avoids disrupting the ecosystem and momentum of minor KDE releases with a major application release, and ensures the ongoing quality of stable updates. Being able to rely on the stability of point releases makes it easier to justify making such releases available in -updates repos. KDE PIM should once again be part of the regular 4.7 release cycle though.

An elaborate joke?

April 6, 2011

I started writing Grantlee::Tubes some time in December 2010. In the course of writing it I’ve mostly been researching what the dependable API of QIODevice is. I don’t know when Tubes will get into a release of the Grantlee libraries, but it probably won’t be the next release. None of the classes yet does all the error handling it could, and the QIODevice research is still ongoing.

Nevertheless I decided to start publishing blogs about Tubes in mid-March to give context to the April fools post about it.

Talk about building up a joke.

The library and concepts are real and useful though, so I’ll push on with publishing these experimental devices to the repo.

Consider a use case where you want to read from a QIODevice that is being written to by another class. For example, QTextDocumentWriter writes a stream of valid ODF XML to a QIODevice, and QXmlStreamReader reads a stream of XML data from a QIODevice.

How can we connect them together?

One way might be to use a QBuffer.

  QBuffer buffer;
  buffer.open(QIODevice::ReadWrite);

  QTextDocumentWriter writer(&buffer, "ODF");
  writer.write(document);

  buffer.seek(0);

  QXmlStreamReader reader(&buffer);

  // ... Do something with the XML.

This works, but it’s not a generic solution. If we wanted to write data to the device asynchronously and do asynchronous line-based reading from it, we would have to make the buffer a member of a class, and when reading from it we would have to do something like this:

  void MyClass::onReadyRead()
  {
    if (!m_buffer->canReadLine())
      return;
    m_buffer->seek(0);
    const QByteArray line = m_buffer->readLine();
    // Remove the consumed line from the front of the internal array.
    m_buffer->buffer().remove(0, line.length());
    m_buffer->seek(m_buffer->size());
    useLine(line);
  }

Reading from a buffer does not discard the data it holds. We have to use the method returning the internal QByteArray to remove the part we read ourselves. We also have to remember to seek() a few times on the buffer. I haven’t even checked that code for off-by-one errors.

Enter Grantlee::Channel.

Grantlee::Channel already made an appearance in my last post. The idea is to solve a connection problem with QIODevices. While Pump can transfer data from a device that should be read to a device that should be written to, Grantlee::Channel is an intermediary providing both a device that consumes data and one that produces data.

The difference between Grantlee::Pump and Grantlee::Channel

There are several significant differences between it and QBuffer. QBuffer is not a sequential device, but Channel is. That means that the pos() and seek() methods are relevant API when working with a QBuffer, but irrelevant and meaningless when working with a Channel. As an implementor of the QIODevice API, that means I don’t have to give those virtual methods any meaning and can ignore them. Instead I implement the readData and writeData methods to provide FIFO semantics. The Channel can be written to at any time, and read from whenever it has data. There is no need for seek()ing, and it automatically discards data that has been read, meaning no manual memory conservation responsibility for the caller.

    QTextDocument *document = getDocument();

    Grantlee::Channel *channel = new Grantlee::Channel(this);
    channel->open(QIODevice::ReadWrite);

    // Write to the channel
    QTextDocumentWriter writer(channel, "ODF");
    writer.write(document);

    // Read from the channel
    QXmlStreamReader reader(channel);

    // Use the reader.

Easy.
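The FIFO semantics described above can be sketched in plain standard C++. This is only an illustration (the class name and API here are invented; the real Grantlee::Channel implements QIODevice instead):

```cpp
#include <algorithm>
#include <deque>
#include <string>

// Hypothetical sketch of Channel-style FIFO semantics: writes append
// to the back, reads consume from the front, and consumed data is
// discarded automatically -- no seek() required, unlike QBuffer.
class FifoChannel
{
public:
    void write(const std::string &data)
    {
        m_buffer.insert(m_buffer.end(), data.begin(), data.end());
    }

    std::string read(std::size_t maxLen)
    {
        const std::size_t len = std::min(maxLen, m_buffer.size());
        std::string result(m_buffer.begin(), m_buffer.begin() + len);
        // Discard what was read; the caller never manages the memory.
        m_buffer.erase(m_buffer.begin(), m_buffer.begin() + len);
        return result;
    }

    std::size_t bytesAvailable() const { return m_buffer.size(); }

private:
    std::deque<char> m_buffer;
};
```

Writes and reads can be freely interleaved, and data is handed out strictly in the order it was written in.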

Another one bytes the dust

April 5, 2011

I’ve just pushed a change to kdepim 4.4 which removes this annoying dialog in a few annoying cases.

For users, this dialog would appear without seeming to give any useful information, and once dismissed, the application was usable anyway.

Showing the dialog was actually a bug that was fixed some time in February 2010 with improvements to the kdepim libraries, but because there was no KDEPIM applications 4.5 release, the fix didn’t make it to users.

The fix was to make the applications not call dangerous API with sub-eventloops.

Making KDEPIM less annoying

April 4, 2011

I’ve started looking into KDEPIM 4.6 on Kubuntu Natty to see if it can be made less annoying to use. There are two unpopular dialogs which appear when using KDEPIM. Both are telling the user that essential PIM services are not fully operational.

The essential PIM services are Akonadi and Nepomuk. Akonadi provides access to all PIM data (emails, contacts, events etc) the user has. It is started automatically if using a PIM application like KMail2, KAddressBook, KOrganizer, KJots and more. There is no configuration option to turn Akonadi off. Akonadi is a cache server which uses a database like MySQL or SQLite to cache data.

Nepomuk provides indexing and searching capabilities to the PIM infrastructure. If you want to search your email, or use autocompletion when typing in email addresses, you need Nepomuk. These are currently considered essential features for a useful PIM stack, so Akonadi depends on Nepomuk being operational. Unfortunately Nepomuk can be turned off, and when it is off, that’s when the user gets the two unpopular dialogs.

There may be a case for coming up with a unified framework for how services can depend on each other and give the user the opportunities to start essential dependent services. It might be something to discuss at the Platform 11 sprint.

However, there are things we can change in the short-term that can benefit the user. For one, I’ve turned one of the annoying dialogs into a passive notification using KNotification.

A notification is less annoying than a dialog

Next I’ll have to consider how to show the other annoying dialog only when attempting to search or autocomplete email addresses…

Grantlee::Thermodynamics and Refrigeration

April 1, 2011

With some new classes in Grantlee::Tubes starting to take shape, I started making use of them in the new Grantlee::Thermodynamics library with the Grantlee::Refrigeration system.

Grantlee::Refrigeration makes use of components from Grantlee::Tubes, like Grantlee::Pump, QtIOCompressor and Grantlee::Cat to create an ideal heat transfer system. The intention is to create a stream of “cold bytes” taking heat out of hot paths in your code, and disposing of the heat through your cooling fan.

Coolness. You can't get enough

Thermodynamics teaches us that while the quantity of energy in a closed system is constant, the quality of the energy (its entropy) is not. Entropy in a closed system is always increasing (the amount of useful energy in the universe is always going down), but we can locally decrease the entropy in a body of mass if we transfer it to another body of mass. This is what refrigeration is about. The decrease in entropy is made visible in the cold cavity of a fridge by the state change that water undergoes from fluid (higher entropy) to solid (lower entropy).

We can take heat (enthalpy and entropy) away from somewhere, but we have to dump that heat somewhere else. It takes work and a refrigerant to transfer heat between thermodynamic bodies. Heat won’t move spontaneously by itself (beyond the limits of equilibrium) so typically the work of heat transfer is done by a pump. Heat transfer in a refrigerator works in a cycle of four stages.

Grantlee already provides a Pump which we can use in our thermodynamic system, and we can use any refrigerant which is plentiful and which has a high capacity for entropy and a lot of hot air, such as a twitter feed.

We start by seeding the system with our refrigerant.

  QNetworkAccessManager *manager = new QNetworkAccessManager(this);
  QNetworkReply *reply = manager->get(QNetworkRequest(QUrl("http://www.twitter.com/feed")));
  Grantlee::Cat *cat = new Grantlee::Cat;
  cat->appendSource(reply);

Cat will read the data from the reply object until it closes, at which point it will start to read from another device to close the cycle (shown later). At this point the data is already saturated; it can’t contain any more entropy at this temperature and pressure.

1 – 2: Reversible Adiabatic (Isentropic) Compression

Typically the first step described in a refrigeration cycle is Isentropic compression – that is, compressing the refrigerant without changing its entropy. The compression causes the data to become super-saturated. We compress the data by tubing the refrigerant through a QtIOCompressor.

  // (condenser shown later)
  Condenser *condenser = new Condenser;
  QtIOCompressor *compressor = new QtIOCompressor(condenser);
  cat->setTarget(compressor);

2 – 3: Constant Pressure Heat Rejection

After compression comes constant pressure heat rejection. As all developers know, constraints of constness can be expressed in C++ with the const keyword. So we require a class for rejecting the heat which will enforce that constraint:

  class Condenser : public QIODevice
  {
    ...
  protected:
    qint64 writeData( const char* data, qint64 len );
  };

Fortunately, the virtual writeData method of QIODevice already takes the data (our refrigerant) as const, so that constraint is already enforced for us. The condenser causes a change of state of the data, thereby decreasing its entropy. The data is once again saturated, but now in its low entropy state and at a lower temperature.

3 – 4: Adiabatic Throttling

We now have to connect up a throttle to perform isentropic expansion, so the entropy and the temperature are maintained, but the refrigerant changes state and becomes unsaturated.

A throttle is trivially implemented by using a QtIOCompressor in reverse, so we omit that for brevity.

At this point, we have our stream of cold bytes at a low temperature and unsaturated, with capacity to absorb some heat, so let’s do that with an evaporator.

4 – 1: Constant Pressure Heat Addition

We require that heat absorption occurs at constant pressure, and once again the const keyword ensures that.

  class Evaporator : public QIODevice
  {
    ...
  protected:
    qint64 writeData( const char* data, qint64 len );
  };

(The implementation of the Evaporator is left as an exercise for the reader)

We can then use the cold bytes in the method that defines our hot code path where we use an evaporator to facilitate the heat transfer to the refrigerant:

  void MyClass::hotPath(..., QIODevice *source, QIODevice *target)
  {

   // ...

    Evaporator *evaporator = new Evaporator;
    evaporator->setTarget(target);
    evaporator->setSource(source);
  }

The very presence of the evaporator in the hot path of our code is enough to cause heat transfer to the cold bytes, increasing their entropy by causing them to change state.

Of course this means that we need to call our hot path with the refrigerant tubing included:

  Grantlee::Channel *channel1 = new Grantlee::Channel;

  myInstance->hotPath(..., throttle, channel1);

  Grantlee::Pump *pump = new Grantlee::Pump;
  pump->setSource(channel1);

  Grantlee::Channel *channel2 = new Grantlee::Channel;
  pump->setTarget(channel2);

  cat->setSource(channel2);

As a result of the state change in the evaporator the data also becomes saturated at high entropy. Recall that this is the same state the refrigerant we originally introduced from twitter was in.

We route the refrigerant from the hot path into a Grantlee::Pump, which provides the work required to satisfy the Second Law, and then forwards the result on to cat, thereby closing the cycle.

Results

I ran some tests using the Grantlee Thermodynamics Toolkit on various algorithms with various parameters of compression and throughput, with results indicating a universal increase in performance when refrigeration was used.

QIODevice Cat is(-not)-a QIODevice

March 30, 2011

In some ways, cat is an opposite of tee. Whereas tee reads from a single source and writes to multiple, cat reads from multiple sources including optionally stdin, and writes to a single target, stdout.

Reading from stdin is actually a key difference between cat and echo, in that echo ignores stdin.

# Doesn't work (where's the echo?):
cat somefile | echo

However, in this example cat will uselessly write the input back to the output.

# Does work:
cat somefile | cat

Initially I considered making Grantlee::Cat implement QIODevice so that it too could be written to. That would make it analogous to the unix command, but would make the implementation of Cat more complex. It would need to use a Grantlee::Channel and maybe a Grantlee::Reservoir internally (which I haven’t written yet) and wouldn’t really have any advantages that couldn’t be achieved some other way.

Then I decided that I should make it implement QIODevice anyway, because it would make the title of this blog post funnier.

In the end though I decided that blog titles are not a good yardstick to measure the aptness of a design, so Grantlee::Cat is-a QObject instead.

Once I have Grantlee::Channel written, writing to Cat might actually make sense and be easy to implement, so I might reconsider it then anyway.

Cat reads sequentially from a list of QIODevices and writes to a QIODevice target. The internal implementation is very simple. It uses a Grantlee::Pump pointing at the target, and sets the source of the pump to each source device in the list in turn, moving on when the current source device is closed.

So Cat is easy to use with readable QIODevices, but if you want to write to Cat (from multiple devices), that’s where Grantlee::Channel will come in.

Pumpin’ ain’t easy

March 23, 2011

An example in my last post used a Grantlee::Pump to transfer data from one QIODevice to another. I’ve just added the Pump class to the Grantlee::Tubes library.

Pumping tends to limits of capacity and drainage

QIODevice provides an asynchronous API for clients to use. A call to readAll() will return the data currently available from the device, but as time passes and the event loop turns, more data may become available.

  QTcpSocket *socket = getSocket();
  QFile *logFile = getLogFile();

  // Read all data from the socket and write it to the log file as it becomes available.
  Grantlee::Pump *pump = new Grantlee::Pump(this);
  pump->setSource(socket);
  pump->setTarget(logFile);

The Pump encapsulates the handling of the readyRead() signal so that clients don’t need to do that themselves.

Actually, pumpin’ IS easy

I used this class just a short time ago as a debugging tool. Working on an embedded platform with only a single serial cable can be a significant constraint. While evaluating performance on the system, I was attempting to run top -d 1 -b & and then start the target application along with a command line interface to the application. The problem was that any attempt to make top run and record in the background failed. The command line interaction system seemed to conflict with proper execution of top, which simply terminated.

Enter QProcess with Grantlee::Pump.

The trick was to make QProcess run top instead of starting it over the serial connection. The output of top (batched) would be written out by QProcess. Of course, QProcess implements QIODevice, so all I needed to do was pump from the QProcess into a QFile:

int main(int argc, char **argv)
{
  QApplication app(argc, argv);

  QProcess topProc;
  topProc.start(QLatin1String("top"), QStringList() << QString::fromLatin1("-d") << QString::fromLatin1("1") << QString::fromLatin1("-b"));

  QFile logFile(app.applicationDirPath() + QLatin1String("/topoutput"));
  logFile.open(QFile::WriteOnly);

  Grantlee::Pump pump;
  pump.setSource(&topProc);
  pump.setTarget(&logFile);

  int exitCode = app.exec();

  logFile.close();
  return exitCode;
}

Of course, this is equally possible without Grantlee::Pump. The class does not solve a hard problem, but it solves it in an object-oriented way, making it easy to use and reuse as part of larger systems.

Pump takes care of the limited rate of drainage from the source device. Handling the limited capacity of the target will require a different Tube.

Tee is for Tubes

March 22, 2011

It is a curiosity that both existing Grantlee libraries begin with the letter ‘T’. This cannot have been a coincidence. I thought it best that I continue the trend and keep adding libraries whose names begin with ‘T’.

So the Grantlee Pipes library was renamed to Grantlee Tubes. Grantlee Tubes is a library of classes supporting the QIODevice API in an object-oriented way. The developer can connect a series of Tubes to achieve exacting goals in much the same way that the Unix programmer connects commands with pipes (‘|’). The first class to hit the public repo is, appropriately, Grantlee::Tee.

Trial and error

I encountered the tee command when I first started using Ubuntu and came across instructions like this to add repositories to the system:

echo "deb http://security.ubuntu.com/ubuntu jaunty-security main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list

I already knew about shell redirection by then so I had wondered why I couldn’t simply do this:

echo "deb http://security.ubuntu.com/ubuntu jaunty-security main restricted universe multiverse" >> /etc/apt/sources.list

Firstly, permission denied, so try Plan B

sudo echo "deb http://security.ubuntu.com/ubuntu jaunty-security main restricted universe multiverse" >> /etc/apt/sources.list

This doesn’t work because although echo is run as superuser, that permission does not cross the ‘>>’ boundary. The shell still attempts to write to the root-owned file as a normal user.

tee resolves that issue by reading from stdin and writing to standard out (like cat) AND to any specified targets.

Tee for Two

The name Tee comes from the plumbing term for a fitting which splits a stream of fluid between two outlets. The tee command allows duplicating a stream to N targets, allowing quite a bit of versatility.

Tee: Duplicating targets

The GNU Coreutils manual is informative and illustrative about the use cases for tee. Primarily, the advantage of tee is that it allows processing of streams in a way that is efficient both in memory use (the entire input does not need to be held in memory) and in speed, by enabling parallel processing.

 wget -O - http://example.com/dvd.iso \
       | tee dvd.iso | sha1sum > dvd.sha1 

Piping and Tubing

QIODevice is one of the most dominant interfaces in Qt. It is the base for most data reading and writing APIs such as QFile, QTcpSocket, QNetworkReply etc, and it is the interface used by most collaborators in data reading and writing, such as QDataStream, QTextStream, QTextDocumentWriter and more. This means, for example, that a QTextDocument may be written to a TCP socket or a QProcess just as easily as it can be written to a file, from a Qt API point of view. The asynchronous nature of QIODevice adds to its versatility and suitability for many tasks around streaming data in the real world.

Just as we combine commands in a Unix shell with pipes, so too we want to be able to combine source and target QIODevices in efficient and familiar ways. It should be possible to implement something like this in Qt:

tardir=grantlee_src
     tar chof - "$tardir" \
       | tee >(gzip -9 -c > grantlee.tar.gz) \
       | bzip2 -9 -c > grantlee.tar.bz2

This is where Grantlee::Tubes comes in.

Tubular

Want to write large data to multiple files without multiple read/write cycles, holding all data in memory or file copying? Easy:

QIODevice *sourceDevice = getSourceData();
QFile *file1 = getFile("file1");
QFile *file2 = getFile("file2");
QFile *file3 = getFile("file3");

Grantlee::Tee *tee = new Grantlee::Tee(this);

tee->appendTarget(file1);
tee->appendTarget(file2);
tee->appendTarget(file3);

// Now write to Tee. readAll() is used for demonstration but is not
// memory efficient.
tee->write(sourceDevice->readAll());

Want to write to a log file while also writing to a QTcpSocket? Easy:

QIODevice *sourceDevice = getSourceData();
QTcpSocket *output = getOutput();
QFile *logFile = getLogfile();

Grantlee::Tee *tee = new Grantlee::Tee(this);

tee->appendTarget(output);
tee->appendTarget(logFile);

tee->write(sourceDevice->readAll());

An important point to grasp is that Grantlee::Tee implements QIODevice. That means that you can write to a Tee (and therefore multiple targets) just as easily as you can write to a single target.

This can be made easier (and asynchronous) using Grantlee::Pump:

tee->appendTarget(output);
tee->appendTarget(logFile);

// Replaced with Pump.
// tee->write(sourceDevice->readAll());

Grantlee::Pump *pump = new Grantlee::Pump(this);
pump->setSource(sourceDevice);
pump->setTarget(tee);

Grantlee::Pump listens to the readyRead signal and writes all available data in the source to the target until there’s no more left. That means that data can be fed into Tee in smaller chunks and it is not required to readAll data into memory up front.

Compression is equally possible. Here’s an example where a Grantlee::Template is rendered to a QIODevice which is piped through a compressor, and then through a QTcpSocket:

QTcpSocket *socket = getSocket();
Grantlee::Compressor *compressor = new Grantlee::Compressor(this);
compressor->setTarget(socket);

Template t = engine->newTemplate("main.html");
Context c = getContext();
// The string template is rendered to the compressor, which writes to the socket. 
t->render(compressor, c);

Down to a Tee

Several of these classes are already in various stages of being written, getting tests and documentation. For now I’ve added Tee to a volatile branch of Grantlee destined for some future release. Along with that all of the infrastructure for a third library in Grantlee has been added: documentation, CMake infrastructure, test framework etc. Grantlee::Tubes will continue growing in the coming days.

Gory technical details

March 20, 2011

In a previous post I wrote some details about how SFINAE works and provides type introspection, and how it can be used to make QVariant more effective at providing access to QObject derived classes. I submitted the patches to Qt, where they got thoroughly reviewed.

One of the issues raised in review was that if the SFINAE template is used with a T which is only forward declared it does not work as expected.

  class MyObject;

  void fwdDeclared() {
    qDebug() << "fwdDeclared" << QTypeInfo<MyObject*>::isQObjectPointer;
  }

  class MyObject : public QObject 
  {
    Q_OBJECT
    // ...
  };

  void fullyDefined() {
    qDebug() << "fullyDefined" << QTypeInfo<MyObject*>::isQObjectPointer;
  }

  int main() {
    fwdDeclared();
    fullyDefined();
  }

...

$ ./test
fwdDeclared 0
fullyDefined 0

Why do we get 0 instead of 1 in both cases? MyObject clearly inherits QObject. Let’s try commenting out the body of fwdDeclared.

  void fwdDeclared() {
//    qDebug() << "fwdDeclared" << QTypeInfo<MyObject*>::isQObjectPointer;
  }
  
$ ./test
fullyDefined 1

Now it works. We get the expected 1, resulting from isQObjectPointer evaluating to true. What’s going on?

It is often wise to avoid false negatives

At the point where the fwdDeclared function is defined, MyObject has only been forward declared – it is an incomplete type.

Recall that our SFINAE template relies on function overload resolution based on the static type of the argument:

template<typename T>
struct QTypeInfo<T*>
{
  static yes_type check(QObject*);
  static no_type check(...);
  enum { isQObjectPointer = sizeof(check(static_cast<T*>(0))) == sizeof(yes_type) };
};

The first time QTypeInfo&lt;MyObject*&gt;::isQObjectPointer is encountered is in fwdDeclared. At that point in the file, MyObject is incomplete, so the compiler cannot see that a MyObject* (such as the one in the SFINAE template) converts to QObject*. That means the catch-all check(...) overload, rather than the QObject* overload, is selected; it returns a no_type, and the enum is resolved to false.

This first result is then used everywhere in the translation unit: QTypeInfo&lt;MyObject*&gt;::isQObjectPointer is treated as an alias for false even after MyObject is fully defined. When we comment out the body of the fwdDeclared function, the first time QTypeInfo&lt;MyObject*&gt;::isQObjectPointer is encountered is in the fullyDefined function, where the compiler can see that the base type is QObject, and the enum evaluates to true.
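The overload trick itself can be demonstrated in plain C++, with a stand-in base class in place of QObject (all names here are invented for illustration). Note that sizeof only inspects which check() overload would be selected; nothing is actually called, so declarations suffice:

```cpp
typedef char yes_type;
struct no_type { char padding[2]; };

class Base {};
class Derived : public Base {};
class Unrelated {};

// Evaluates to true when T* implicitly converts to Base*, i.e. when T
// is Base or (visibly) derived from Base.
template<typename T>
struct IsBasePointer
{
    static yes_type check(Base *);  // chosen if T* converts to Base*
    static no_type check(...);      // fallback for everything else
    enum { value = sizeof(check(static_cast<T *>(0))) == sizeof(yes_type) };
};
```

With Base forward declared but not defined, the conversion from Derived* would be invisible and the fallback overload would win, exactly as described above.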

So what is a translation unit? Can’t we just consider not using them if they’re a problem?

Lost in translation

Translation is what a compiler does to turn source code into object code. A translation unit, or compilation unit, is effectively the output of the preprocessor after resolving all the #if, #ifdef, #include etc. directives. A forward declaration can appear multiple times in a translation unit, but a definition can appear at most once. This is the one definition rule (ODR).

In the example above, the first time QTypeInfo&lt;MyObject*&gt;::isQObjectPointer is encountered it is evaluated to false; therefore, if the ODR is to be enforced within the translation unit, it must be false throughout the entire translation unit.

QTypeInfo&lt;MyObject*&gt;::isQObjectPointer might even have a different value in different translation units, depending on whether MyObject was defined or only forward declared in each one. A compiler could conceivably reject this outright, though GCC isn’t there yet.

The ODR is not necessarily enforced by the compiler. Causing the template trait to be evaluated differently in different translation units would also conceivably be an ODR violation, and it is undefined whether the compiler needs to do anything about that, because compilers don’t necessarily have all the information needed to detect it.

Relying on undefined behaviour would be dangerous.

You just have to deliberately the whole thing!

To bring this back to the QVariant/QMetaType context, the SFINAE template as it was before did function if the type T was forward declared, although it gave the wrong answer. Any of the QVariant functions that make use of the template require T to be a complete type anyway, but that still leaves the issues of maintainability (a Qt developer in the future might use the trait in a way that doesn’t require the full type) and C++ header voyeurism (a third party sees something in internal API that looks useful and uses it, with undefined, undebuggable results).

Both are valid issues of course, and we already know that using the trait with a forward-declared T can silently give the wrong answer and might cause ODR violations. So we want to enforce that the type is fully defined.

The way to do that is to simply use the type in a way that requires the full type to be known. One option would be to try to call a static method like connect() on the QObject that T is supposed to inherit. If a call to connect() can be made, the type is fully defined. This fails to compile, of course, if T happens not to have a static connect() method in its interface. That option goes out the window, as we still need to compile when T is not a QObject.

Size *does* matter

Another language feature that requires the full type to be defined is the sizeof operator. sizeof(T*) == sizeof(void*) in most cases, and that works even if T is forward declared. sizeof(T), however, requires that T be fully defined. So all we need to do is invoke sizeof(T) somewhere in our SFINAE template and we’ll get a compiler error if T isn’t complete.

template<typename T>
struct QTypeInfo<T*>
{
  static yes_type check(QObject*);
  static no_type check(...);
  enum { isQObjectPointer = sizeof(check(static_cast<T*>(0))) == sizeof(yes_type)
                            + (0 * sizeof(T)) };
};
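A plain-C++ sketch of the same enforcement, with a stand-in base class (names invented for illustration):

```cpp
typedef char yes_t;
struct no_t { char padding[2]; };

class Widget {};                 // stand-in for QObject
class Button : public Widget {};

// As in the patch above, (0 * sizeof(T)) contributes nothing to the
// value, but sizeof(T) forces T to be a complete type: if T is only
// forward declared, the enum fails to compile instead of silently
// evaluating to false.
template<typename T>
struct IsWidgetPointer
{
    static yes_t check(Widget *);
    static no_t check(...);
    enum { value = (sizeof(check(static_cast<T *>(0))) == sizeof(yes_t))
                   + (0 * sizeof(T)) };
};
```

Using IsWidgetPointer with a forward-declared class now produces a hard error at the point of use, which is exactly the maintainability guarantee we wanted.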

Consequence

Because we now require that the full type is known when evaluating the SFINAE template, we compromise the feature of automatic Q_PROPERTY handling. The translation unit that the .moc file is in does not necessarily contain a full definition for T so QTypeInfo<T*>::isQObjectPointer won’t necessarily compile and we can’t put it in .moc files.

The final twist in the tale is that we can’t even use QMetaType to store the information about whether a type is a QObject subclass anymore. In the previous patch that information was stored in the QMetaType data structures as a result of the qRegisterMetaType() call. However, it turns out that qRegisterMetaType also does not require the T to be a complete type. Using our SFINAE class inside that method imposes that as a new restriction which is source incompatible.

So instead of storing that information once per metatype in QMetaType, we have to store it in the QVariant as part of the data type stored in it. This turned out to be a better solution in the end anyway because it eliminates calls to QMetaType which require locking and unlocking a mutex.

It’s still not in Qt yet though, we’ll have to see if it makes it.

[Aside: A picture is worth a thousand words, and this post is now just over a thousand. I guess the picture is only worth the words if you know the words...]

ODR compromise

March 17, 2011

