Archive for June, 2010

How to retrieve a QModelIndex for a custom data object

June 22, 2010

The QAbstractItemModel interface provides two things: structure and data.

The structure of the model is defined by how you implement the rowCount, columnCount, index and parent methods.
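
As a very rough sketch (MyModel is just a hypothetical name), a read-only model subclass only has to reimplement those four methods, plus data, for a view to be able to navigate and display it:

// Minimal structural interface of a model. Reimplementing rowCount,
// columnCount, index and parent defines the shape of the tree; data()
// then exposes its contents.
class MyModel : public QAbstractItemModel
{
  Q_OBJECT
public:
  virtual int rowCount(const QModelIndex &parent = QModelIndex()) const;
  virtual int columnCount(const QModelIndex &parent = QModelIndex()) const;
  virtual QModelIndex index(int row, int column, const QModelIndex &parent = QModelIndex()) const;
  virtual QModelIndex parent(const QModelIndex &child) const;

  virtual QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const;
};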

The data represented in the model is accessible through the data method. data allows the use of custom roles to retrieve objects of custom types. Rows in the EntityTreeModel can represent things like emails or addressee objects.


// Retrieve the Akonadi Item stored at 'index' via a custom role, then
// extract the email payload from it.
Item item = model->data(index, ItemRole).value<Item>();
Message::Ptr email = item.payload<Message::Ptr>();

The nice thing about data is that it works even through proxy models which know nothing about the email types. So the model object in the example could be an EntityTreeModel, or any of the many proxy models we have these days.

But what if we already have the email, but we need to get a QModelIndex that represents it in the model? Reasons we need that include storing the open email folder on application close and restoring it on application start. (By the way: KViewStateSaver)

The model itself knows how to map between them, so the temptation would be to implement EntityTreeModel::indexForCollection. In fact, that’s exactly what CollectionModel, the predecessor to EntityTreeModel, did. The reason that is a bad idea is that it only works on the base model. It doesn’t work through proxy models. If you use model-view flexibly in your API, you will most likely have only a QAbstractItemModel pointer at the point where you want to get the correct QModelIndex. You won’t know whether it is the base model or a proxy model on top of the base model.

Someone might try to always have an instance of the base model around, use its indexForCollection method, and then try to map the result to the proxy at hand. That requires either putting the knowledge of the proxy chain in all methods that use indexForCollection (and maintaining that when another proxy is added or removed), or introspecting it using KModelIndexProxyMapper. Neither is a very good solution.
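
For illustration, the introspection option looks roughly like this (proxyModel, baseModel and baseIndex are assumed to exist already, with proxyModel sitting somewhere above baseModel in a proxy chain):

// KModelIndexProxyMapper discovers the proxy chain connecting the two models
// and maps indexes across it. The proxy is the "left" model and the base the
// "right" one, so mapRightToLeft() turns a base index into a proxy index.
KModelIndexProxyMapper mapper(proxyModel, baseModel);
const QModelIndex proxyIndex = mapper.mapRightToLeft(baseIndex);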

One way around that is to use the match() method. match allows queries on custom roles too, but it requires using Qt::MatchRecursive, which performs a linear search over the model. If proxies are used, the call is even more expensive.
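
A query like that might look something like the following (collection and model are assumed to exist, and CollectionIdRole is the EntityTreeModel role carrying the collection id):

// Look up the single index whose CollectionIdRole matches the id of a known
// collection. Qt::MatchRecursive forces match() to walk the entire tree,
// which is what makes this expensive, especially through proxies.
const QModelIndexList matches = model->match(model->index(0, 0),
                                             EntityTreeModel::CollectionIdRole,
                                             collection.id(), 1,
                                             Qt::MatchExactly | Qt::MatchRecursive);
const QModelIndex idx = matches.isEmpty() ? QModelIndex() : matches.first();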

At first I wrote a custom match implementation to forward the calls through proxies, making that cheaper, and implemented it in the base model to return a fast mapping. The problem is that it requires an arcane use of match to work, and that it doesn’t scale. Each proxy needs to have the custom match boilerplate, and if one proxy is added which does not have that implementation, the solution breaks down and I get bug reports.

The solution is only slightly more elegant.

Applications do know that when they deal with a QAbstractItemModel they are dealing with an abstraction of the base model. In Akonadi applications, they are either dealing with an EntityTreeModel or a proxy on top of an EntityTreeModel. By adding a static method to the EntityTreeModel, we can use a private implementation to create an index for the collection in the EntityTreeModel, and do all the mapping required to convert that index to an index in the proxy model.
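
A speculative sketch of how such a static method can be put together (this is not the real Akonadi code, and indexInEtm is a hypothetical stand-in for the private lookup):

// Hypothetical stand-in for the fast, private collection-to-index lookup
// inside EntityTreeModel.
QModelIndex indexInEtm(EntityTreeModel *etm, const Collection &collection);

QModelIndex indexForCollectionSketch(QAbstractItemModel *model, const Collection &collection)
{
  // Walk down through any proxies to find the EntityTreeModel at the base.
  QAbstractItemModel *base = model;
  while (QAbstractProxyModel *proxy = qobject_cast<QAbstractProxyModel *>(base))
    base = proxy->sourceModel();

  EntityTreeModel *etm = qobject_cast<EntityTreeModel *>(base);
  if (!etm)
    return QModelIndex();

  // Create the index in the base model using its private implementation.
  const QModelIndex etmIndex = indexInEtm(etm, collection);

  // Map that index back up through the proxy chain to the caller's model.
  KModelIndexProxyMapper mapper(model, etm);
  return mapper.mapRightToLeft(etmIndex);
}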

That’s why EntityTreeModel has indexForCollection and indexesForItem as static methods.

However…

Static methods are bad for unit testing. If applications are using

idx = EntityTreeModel::indexForCollection(model, cId);

then the model at the base must be an EntityTreeModel. We can’t swap it out for a FakeEntityTreeModel for the purpose of unit testing. Well, actually, with a bit more introspection I will be able to do just that, but then EntityTreeModel will have knowledge of the FakeEntityTreeModel. That is also not ideal, but it’s the best compromise so far for mapping objects back to QModelIndexes.

Speaking of unit testing model-view code, I will be talking about exactly that at Akademy this year.


KDE PIM promo multipliers

June 11, 2010

There is a lot of interesting and innovative development currently going on in KDE PIM. Over the past several years we have been porting the 10-year-old Kontact suite to the new and spiffy Akonadi and KDE PIM platform. The primary benefits of this are scalability, extensibility, reliability, maintainability and ubiquity. The bread and butter of software engineering.

Development of mobile versions of KDE PIM is advancing fast

At our Osnabrück meeting early this year we discussed promoting KDE PIM for the purposes of attracting new developers and interested users for testing. Some of the ideas from the brainstorming session were good ones and have been actioned. The key to getting the message out to a wide audience is multipliers.

Multipliers

The developers and others close to the KDE PIM community number only a few tens of people, but we want to reach an audience of tens of thousands. We can put out messages about features and improvements in KDE PIM, but to reach the wider audience we need others to relay the message.

We’ve been putting the message out through more channels over the last few months, such as Twitter and Facebook, and have had success getting German-language content into LinuxMagazin, both in print and online. Markus Feilner attended the Osnabrück meeting in January and wrote a short article and a longer article about it in German. More recently he has covered the developments in KDE PIM mobile, so that is a good channel that we can keep open.

Of course, the KDE promotion team and infrastructure is also a huge help. Regular KDE Software Compilation release announcements on the dot help spread awareness of the coming stable releases for KDE PIM, and the dot editors are helpful and flexible enough in scheduling to get our own articles onto the dot. The dot is a good multiplier itself too, because articles appearing there often appear on other sites such as LWN.

How you can help

Even more multiplication of the news of the work we’re doing in KDE PIM would help us reach more users and potential developers. Anyone can help us get the message out. Is it still worthwhile to submit stories to Slashdot or Digg? Can you do that for us? Many of us in the KDE PIM team don’t know which sites are relevant to post to these days. Should we be on Technorati? Should we be on Delicious? Are we on Delicious already? I have no idea.

If you use these websites and know how to submit stories, go for it! Be part of the KDE PIM promo. Join the game.

Make sure you let us know as well. We can let you know if we fixed 20 bugs today, if we make something 14% faster, give you screenshots, or anything else suitable for twittering about. It also means you get the information before anyone else :). The character limits on those kinds of sites don’t suit my writing style.

Of course if you want to write more than tweets or dents about KDE PIM, we can get your blog onto Planet KDE or Planet KDE PIM.

Who knows – it might even reach a national newspaper some day.

So, what are we doing?

The dependencies of KDE PIM have become more portable over recent years. Qt4 licensing changes made it easier to deploy Free Software on non-free operating systems. GPG4Win was developed and deployed, bringing cryptographic features (and KDE technology) to Windows. More recently, extensive work has been done to port D-Bus to Windows and WinCE. The master branch in the D-Bus repository now contains the results of work that has been going on inside KDE and elsewhere in different branches and repositories. Future releases of D-Bus should work just fine on Windows-based systems.

This means that the entire KDE stack is portable in principle. In complicated programs there is always going to be platform-specific code which needs to be written, and in KDE PIM we have been making the PIM platform and applications work on Windows, MacOS, Maemo and WinCE just as well as they work on Linux. Those ports are in early alpha states of completion and stability, but work is ongoing. It’s an exciting time to be part of it. Future efforts innovating on top of Akonadi and Nepomuk could change the way we handle information, the web and the cloud, and we already have plenty of ideas.