KDE PIM 4.6 RC2

May 31, 2011

Since my last blog, a few more bugs have been fixed in the KDE PIM 4.6 branch, so we’ve put out another RC release. This will likely become the final 4.6 version that gets tagged in a few days and released next week.

I’ve been trying to make clear that this is a usable, if not entirely bug-free (or regression-free), release. There is no need to fear it, so do try it. Returning to KDE PIM 4.4 may mean losing trivial information, such as which emails you marked read or important while using 4.6, but certainly not suffering world-ending information loss that can never be recovered.

Still, we do consider this a release worth switching to for many people. KDE PIM 4.6 will not fix every problem of every 4.4 user, but it puts us on track to get there.

Hysterical Raisins

May 30, 2011

I often see software developers who use Qt looking for existing solutions and libraries. Developers using Qt want IMAP libraries, a way to create Tar files, and more.

Many such cool classes and features are part of the KDE libraries, but are rarely considered by developers using Qt. To them it seems that KDE is bloated, or that depending on KDE just to create Tar files is unrealistic. To a large extent that is false, as the KDE libraries already come in granular form. The kdecore library takes up just 2.5 MB of space. The kdeui library consumes 3.7 MB. For comparison, QtGui is an 11 MB binary. Parts of the KDE libraries already depend only on Qt, such as Solid, the hardware abstraction layer.

So it is true that we’re not doing at all badly already, but there are still areas we can improve. Part of that extra work is in communication (*cough*), packaging and build system improvements, which have already been happening across the whole contributor base. It’s really great that I can run a command like

  debtree libkimap4 | dot -Tpng > kimap.png 

and get an image showing just how few dependencies there are these days for KIMAP.
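For developers who just want IMAP from Qt code, using KIMAP standalone looks roughly like the following. This is a sketch from memory, so treat the exact class and method names as assumptions and check the kdepimlibs API documentation; the credentials and the receiving slot are hypothetical.

  // Sketch: logging in to an IMAP server with KIMAP from plain Qt code.
  #include <kimap/session.h>
  #include <kimap/loginjob.h>

  void startImapLogin(QObject *receiver)
  {
    KIMAP::Session *session = new KIMAP::Session("imap.example.com", 143, receiver);

    KIMAP::LoginJob *login = new KIMAP::LoginJob(session);
    login->setUserName("jobloggs"); // hypothetical credentials
    login->setPassword("secret");

    // LoginJob is a KJob, so completion arrives via the result() signal.
    QObject::connect(login, SIGNAL(result(KJob*)),
                     receiver, SLOT(onLoginDone(KJob*)));
    login->start();
  }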

Historical Reasons

However, there are still some interdependencies in the KDE libraries that may make adoption more tricky. You can’t have the plugin system or the asynchronous Job APIs without pulling in the localization system and the configuration system. This is the case largely for historical reasons, and to some extent the impact can be lessened, but the currant state is that people who want to use parts of the KDE libraries fork the parts they want and leave the rest.

I recently came across a word the Germans use to recognize situations like this in just a single word: Rosinenpickerei, translated as raisin picking. At first I thought, wow – The Germans have taken the concept of ‘Choosing the required/desired parts of a whole, and leaving the rest behind’ and squeezed it into a single word. Then I realized it was just like cherry-picking in English, so nothing to get all stereotypical about.

Turning it up to 11

Recognizing room for improvement, the KDE community organized a developer sprint meeting to focus on the platform and libraries right across the board. We’ve been running on 10 all the way up to now, and have come to the question of where we can go from here. We need that extra push over the cliff, so y’know what we do? We turn it up to 11. We have an in-person, face-to-face meeting for an entire week to tackle issues such as how we can improve Qt to make KDE software better, how we can improve the KDE libraries to make the lives of Qt developers easier, and how we can improve the KDE platform to increase consistency among KDE applications.

That’s right, eleven.

Data fragmentation between ‘owning’ services?

May 17, 2011

I saw a snippet recently pointing to a concern about data fragmentation relating to Akonadi.

After talking with doctormo about it, the concern seems to be that there are a lot of services and applications trying to be the single standard point of access for PIM data.

From my standpoint (biased obviously) I don’t think it is a relevant issue for applications using Akonadi, because Akonadi is designed exactly to deal with the data fragmentation issue by consuming PIM data from other locations. For example, contact details from Pidgin can now be made accessible in KAddressBook:

Pidgin and KAddressBook sharing data

Here’s how that happened:

After digging around I found that Elementary is an Ubuntu derivative which is making some new applications, such as postler, an email application. I couldn’t find any ‘backend’ for postler; it seems to talk directly to dexter, the addressbook application. If the two applications catch on, that would mean additional places where your data exists or needs to be copied and can possibly get out of sync, and it could mean that dexter must be running in order for postler to have access to addressees. I couldn’t find an SCM repo to check whether that is the case.

Postler and dexter don’t use evolution-data-server as a backend either, preferring to avoid it for whatever reason. Akonadi was likely never considered by the postler developers because they might think it depends on something fictional like the KDE desktop environment, which is clearly not the case. The point, though, is that non-adoption of common infrastructure between different applications is usually a social, communication and perception problem, not a technical one. That can lead to the kinds of data fragmentation and inter-application dependencies which we saw between KDE 3 applications and which appear to exist between postler and dexter.

Those who have followed Akonadi development will know that PIM applications in KDE PIM 4.6.x do not own the data they show. KAddressBook does not need to be running or even installed in order for KMail or KOrganizer to access contact information for people. The reason is of course that KAddressBook does not own the data. Even Akonadi does not own it, but just provides access to it from third party data providers.

This is the primary distinction between data syncing between applications and data sourcing by applications. Syncing requires that one-to-one copies be made (and kept up to date) of ‘shared’ data between collaborating applications, possibly between differing formats or data representations and different transport mechanisms. Shared data, however, allows many-to-many relations where all applications can access all data with zero knowledge of each other. Because no application owns the data, less work is needed to prevent them getting out of sync with each other.

Akonadi is designed to address the problem of data fragmentation by knowing the various sources of PIM data and making all of it available to applications. An application doesn’t need to know where Akonadi gets the data from; it only needs to get it from Akonadi, and not concern itself with the true origin.
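To make that concrete, fetching items in an application is a matter of asking Akonadi, regardless of where the data originates. A minimal sketch, assuming the kdepimlibs Akonadi client API; the receiver object and slot name are hypothetical:

  // Sketch: ask Akonadi for the items in a collection. The application
  // never learns whether they came from a local file, a groupware
  // server or an instant messenger.
  #include <Akonadi/ItemFetchJob>
  #include <Akonadi/ItemFetchScope>
  #include <Akonadi/Collection>

  void fetchItems(const Akonadi::Collection &collection, QObject *receiver)
  {
    Akonadi::ItemFetchJob *job = new Akonadi::ItemFetchJob(collection, receiver);
    job->fetchScope().fetchFullPayload(); // we want the data itself, not just ids
    QObject::connect(job, SIGNAL(result(KJob*)),
                     receiver, SLOT(onItemsFetched(KJob*)));
  }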

This means that if it came to it and people wanted to share data between dexter and Akonadi based applications, it would be possible. A somewhat more interesting project, for various reasons, from the GObject side at the moment is folks. Folks is designed to aggregate information about individual people between multiple different applications, comparable to the PIMO::Person concept KDE people might be more familiar with.

There are Qt bindings currently under parallel development with the GLib/GObject-based library, which means that if PIM data from Gtk- or GNOME-based applications is made available to folks, making that same data available to KDE applications is as simple as writing a new Akonadi resource for it.

Because the Qt bindings implement the QtContacts API, a generic Akonadi resource for that API works with any implementation of it. So even the Akonadi backend for folks doesn’t have any direct knowledge of the folks system. I created a proof of concept for exactly that this week, allowing contacts from Pidgin to be accessed in KAddressBook. With the roadmap for Qt-folks, it could become more useful for data sharing going forward.
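To illustrate why that works, here is roughly what client code against the QtContacts API looks like. This is a sketch assuming the QtMobility QtContacts API; the “folks” manager name is hypothetical and depends on what the Qt bindings register themselves as.

  // Sketch: any backend implementing the QtContacts API can be
  // addressed this way, which is why one generic Akonadi resource
  // suffices for all of them.
  #include <QContactManager>
  #include <QContact>
  #include <QDebug>

  QTM_USE_NAMESPACE

  void listContacts()
  {
    QContactManager manager(QLatin1String("folks")); // hypothetical manager name

    foreach (const QContact &contact, manager.contacts())
      qDebug() << contact.displayLabel();
  }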

Thanks to Martin and to Travis for reviewing this post and providing feedback.

KDE PIM 4.6 RC1

May 15, 2011

The Kontact2 release train rolls on. Today we fixed the last of the release blocking bugs and tagged the release candidate.

It seems that many people around the Internet are as excited about the new release as we are. The message that this is not a bug-free release seems to be getting through. There will be bugs to annoy users, but the applications are usable and useful, and they work without crashing and without losing data. The reports on my last post (which will of course be forgotten unless filed as non-duplicate bug reports :)) indicate normal-to-low severity bugs of the kind expected in any Free Software, let alone in a release involving a platform re-architecture and a multi-application port and rewrite.

This version will have regressions compared to KDE PIM 4.4. There was never a goal to create a Kontact2 with zero regressions. The only goal was to create a working release. After that the work on making it perfect can begin. Division of resources between maintaining the 4.4 series and attempts to perfect the Kontact2 release was causing demotivation in the community. Making the release is the act that allows us to cross the starting line towards fixing the smaller issues.

Prepare to land!

So, onwards and onwards. The tagging of the release is due to happen on June 2nd, before a release alongside the 4.6.4 KDE SC, so all fixes and translations which are in the source tree by then get in :).

The next step: Coisceim

May 2, 2011

Using cloud services for your network and data gives away your control over that network and data. I read with interest how Ars Technica temporarily lost both, and the stories of other people who don’t have the weight of a well-known news website behind them to help get it back. If you depend on cloud services for your network and data, they are not your network and data. You just have the privilege of being able to access them, which can be taken away for any or no reason. Usually the TOS of such services says as much. That doesn’t matter though, as you didn’t read the TOS anyway.

Usually the main features of such services are access anywhere, aggregation, customized visualization and social features like uploading your picture and allowing everyone to consume your data. These are things which can be provided to some extent by locally installed applications too.

The new KDE PIM platform allows local aggregation of many different types of data. The aggregation is a feature in itself. As PIM data is part of the KDE PIM Platform, and not of individual applications, this allows a departure from applications which focus on a single type of data or protocol (such as KMail -> email, KAddressBook -> contacts, KOrganizer -> events) towards a focus on what the user is doing or trying to achieve.

In the old KDE PIM Platform, applications owned the data and provided a scriptable access interface to it over D-Bus. In the new platform however, applications only provide a user interface to the data, and the data interface is provided by Akonadi. That makes the applications themselves far smaller, and makes it easy to split them up and create more purpose-built applications to fit what the user wants. Newspeak centerward make easy newapplications indeed.

I’ve just pushed a new application as an example use of the KDE PIM Platform called coisceim, which is used for managing trips.

Using coisceim I can now associate a mail folder, a calendar of todo items, and a KJots book with an event, and track it all as one semantic object, a trip. The video below shows the current UI (which obviously needs work :) ) being used to create a trip, mark todo items as done, and edit notes.


(MP4 version)

Coiscéim, pronounced ‘Kush-came’, is an Irish word meaning footstep. I dropped the fada from the application name out of sympathy for those non-Gaeilgeoir gearheads.

Coisceim provides a single user interface for accessing mails, todo items and notes relevant to an event. In my day job I do some travel for consultancy and to deliver Qt trainings. The planning of such events usually includes emails back and forth about the content and time of the training and flight, car and hotel bookings. Alongside that, the training manager creates todo items which must be completed prior to the training such as printing the material, getting in touch with the customer etc. We handle those in KOrganizer (until now). Additionally, I need to make notes about who to contact when I arrive, the address of the hostel etc, and during the training I use notes to keep track of student queries so I can answer them the following day. A single focussed, coherent application for all of those aspects is intended to make this all easier to manage.
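Under the hood the idea is simple: a trip is little more than a named grouping of Akonadi collections, one per type of PIM data. A hypothetical sketch of the concept, not the actual coisceim code:

  #include <Akonadi/Collection>
  #include <QString>

  struct Trip
  {
    QString name;                    // e.g. "Qt training, Berlin"
    Akonadi::Collection mailFolder;  // emails about the trip
    Akonadi::Collection todoList;    // calendar of todo items
    Akonadi::Collection notebook;    // KJots book of notes
  };

Because the collections stay in Akonadi and the trip only refers to them, the same folders and notes remain visible to KMail, KOrganizer and KJots.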

Interesting new user stories emerge with purpose-built applications on the new platform, such as being able to use a dedicated Plasma activity for planning a trip. In the video below I use a Plasma activity which is configured with a KJots plasmoid to edit a note which is already part of a trip planning. The plasmoid is used as a form of well organized clipboard in this case.


(MP4 version)

Because no application owns the data and there can be many applications capable of accessing the same data, editing the notes in one application updates them in all applications instantly. So when I edit the note in the KJots plasmoid, the note in coisceim is also updated.

Coisceim can also of course be embedded into Kontact, and there is a mobile variant using some components from the Kontact Touch suite.


(MP4 version)

In the KDE PIM community we have many ideas about other task-orientated applications we’d like to build, but we’d like to hear other ideas too. The ideas demonstrated at the last KDE PIM meeting, like notetaker, are also good examples of the kinds of directions we can go in with PIM data visualization.

Writing coisceim, including the Kontact plugin and the mobile variant, has taken a total of approximately 5 man-days. It is clearly not a complete application yet, but most people would appreciate that being able to turn an idea into a screencastable prototype in one week is certainly impressive and a credit to the platform. Further work on the application might involve making contacts part of the UI, using Nepomuk to automatically create associations of emails, todos and notes to trips, making it prettier, etc., but I don’t know if I’ll ever have time to do that. As a platform dude I’m more of an enabler really. I just made the app to show what the KDE PIM Platform is capable of, so if anyone else wants to take the reins I’ll happily mentor.

What tasks do you do which cause you to switch between many PIM applications at the same time? What keeps you tied to the cloud (if you don’t like that :)) ?

It is coming…

May 1, 2011

I joined KDE development in 2006, and joined the KDE PIM team soon after that, where my claim to fame has been EntityTreeModel and other related Qt models and proxies.

My development efforts have always been on the next generation: the Akonadi-based, Nepomuk-using, full-blown-database-depending, modular, portable beauty that will soon be known as KDE PIM 4.6.

After such a long time spent developing the new application suite, it will be great to finally pass the starting line of “Kontact 2”. This is of course just a new beginning for KDE PIM, a milestone in the application development lifecycle and a renewed focus on software that users actually have their hands on. For a long time it has made sense to fix bugs in KDE PIM 4.4 because that is what users had installed. With this release, the development effort turns fully to the future platform.

KDE PIM 4.6 is due to be released on June 7th alongside the 4.6.4 versions of the rest of the KDE application suite.

In terms of features, we think it is user ready and there have been many positive reports already from users of development versions. It is definitely not bug-free however. There are still some problematic communications with Exchange servers, and some resource usage spikes, but we are confident that this is overall a step forward. Users will be able to (mostly) downgrade to KDE PIM 4.4 if the 4.6 version does not meet expectations, but that is not a long-term solution so good bug reports will be required for a smoother experience going forward.

Making software releases is an interesting process requiring coordination: translation teams must use the correct branch to generate translation files and produce translated software; promo teams create the story of the release, with an understandable message and expectations; packaging teams work on getting the software into repositories for end users while ensuring the ongoing quality of existing installations; and even the developers need coordinating to get the remaining bugs resolved.

The packaging world is something I’ve been getting into lately so I can begin to understand more broadly what is involved with packages and distros. Technically, KDE PIM is not going to be part of any 4.6 release of the KDE Application Suite, but will be released alongside it on the same day. This avoids disrupting the ecosystem and momentum of minor KDE releases with a major application release, and ensures the ongoing quality of stable updates. Being able to rely on the stability of point releases makes it easier to justify making such releases available in -updates repos. KDE PIM should once again be part of the regular 4.7 release cycle though.

An elaborate joke?

April 6, 2011

I started writing Grantlee::Tubes some time in December 2010. In the course of writing it I’ve mostly been researching what the dependable API of QIODevice is. I don’t know when Tubes will get into a release of the Grantlee libraries, but it probably won’t be the next release. None of the classes yet do all the error handling they could, and the QIODevice research is still ongoing.

Nevertheless I decided to start publishing blogs about Tubes in mid-March to give context to the April fools post about it.

Talk about building up a joke.

The library and concepts are real and useful though, so I’ll push on with publishing these experimental devices to the repo.

Consider a use case where you want to read from a QIODevice that is being written to by another class. For example, QTextDocumentWriter writes a stream of valid ODF XML to a QIODevice, and QXmlStreamReader reads a stream of XML data from a QIODevice.

How can we connect them together?

One way might be to use a QBuffer.

  QBuffer buffer;
  buffer.open(QIODevice::ReadWrite);

  QTextDocumentWriter writer(&buffer, "ODF");
  writer.write(document);

  // Rewind so the reader starts from the beginning.
  buffer.seek(0);

  QXmlStreamReader reader(&buffer);

  // ... Do something with the XML.

This works, but it’s not a generic solution. If we wanted to write data to the device asynchronously and do asynchronous line-based reading from it, we would have to make the buffer a member of a class, and when reading from it we would have to do something like this:

  void MyClass::onReadyRead()
  {
    if (!m_buffer->canReadLine())
      return;
    // Rewind to read the line from the front of the buffer.
    m_buffer->seek(0);
    const QByteArray line = m_buffer->readLine();
    // Manually discard the data we just read.
    m_buffer->buffer().remove(0, line.length());
    // Reposition at the end so new data gets appended.
    m_buffer->seek(m_buffer->size());
    useLine(line);
  }

Reading from a buffer does not invalidate the data it holds. We have to use the method returning the internal QByteArray to remove the part we read ourselves. We also have to remember to seek() a few times on the buffer. I didn’t even test the code for off-by-one errors.

Enter Grantlee::Channel.

Grantlee::Channel already made an appearance in my last post. The idea is to solve a connection problem with QIODevices. While Pump can transfer data from a device that should be read to a device that should be written to, Grantlee::Channel is an intermediary providing both a device that consumes data and one that produces data.

The difference between Grantlee::Pump and Grantlee::Channel

There are several significant differences between it and QBuffer. QBuffer is not a sequential device, but Channel is. That means that the pos() and seek() methods are relevant API when working with a QBuffer, but irrelevant and meaningless when working with a Channel. As an implementor of the QIODevice API, that means I don’t have to implement any meaning in those virtual methods and can ignore them. Instead I can implement the readData() and writeData() methods with FIFO semantics. The Channel can be written to at any time, and read from whenever it has data. There is no need for seek()ing, and it automatically discards data that has been read, meaning no manual memory conservation responsibility for the caller.

    QTextDocument *document = getDocument();

    Grantlee::Channel *channel = new Grantlee::Channel(this);
    channel->open(QIODevice::ReadWrite);

    // Write to the channel
    QTextDocumentWriter writer(channel, "ODF");
    writer.write(document);

    // Read from the channel
    QXmlStreamReader reader(channel);

    // Use the reader.

Easy.
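For the curious, here is roughly what a FIFO QIODevice can look like. This is a minimal sketch of the idea, not the actual Grantlee::Channel code:

  #include <QIODevice>
  #include <cstring>

  class FifoDevice : public QIODevice
  {
    Q_OBJECT
  public:
    explicit FifoDevice(QObject *parent = 0) : QIODevice(parent) {}

    // A FIFO has no random access, so report the device as sequential;
    // pos() and seek() then have no meaning.
    bool isSequential() const { return true; }

    qint64 bytesAvailable() const
    { return m_data.size() + QIODevice::bytesAvailable(); }

  protected:
    qint64 readData(char *data, qint64 maxSize)
    {
      // Hand out bytes from the front of the FIFO and discard them,
      // so the caller never needs to manage the memory.
      const qint64 size = qMin(maxSize, qint64(m_data.size()));
      memcpy(data, m_data.constData(), size);
      m_data.remove(0, int(size));
      return size;
    }

    qint64 writeData(const char *data, qint64 maxSize)
    {
      // Append to the back of the FIFO and wake up any readers.
      m_data.append(data, int(maxSize));
      emit readyRead();
      return maxSize;
    }

  private:
    QByteArray m_data;
  };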

Another one bytes the dust

April 5, 2011

I’ve just pushed a change to kdepim 4.4 which removes this annoying dialog in a few annoying cases.

For users, this dialog would come up without seeming to give any useful information, and when dismissed, the application was usable anyway.

Showing the dialog was actually a bug that was fixed some time in February 2010 with improvements to the kdepim libraries, but because there was no KDEPIM applications 4.5 release, the fix didn’t make it to users.

The fix was to make the applications avoid calling dangerous API involving sub-eventloops.

Making KDEPIM less annoying

April 4, 2011

I’ve started looking into KDEPIM 4.6 on Kubuntu Natty to see if it can be made less annoying to use. There are two unpopular dialogs which appear when using KDEPIM. Both are telling the user that essential PIM services are not fully operational.

The essential PIM services are Akonadi and Nepomuk. Akonadi provides access to all PIM data (emails, contacts, events etc) the user has. It is started automatically if using a PIM application like KMail2, KAddressBook, KOrganizer, KJots and more. There is no configuration option to turn Akonadi off. Akonadi is a cache server which uses a database like MySQL or SQLite to cache data.

Nepomuk provides indexing and searching capabilities to the PIM infrastructure. If you want to search your email, or use autocompletion when typing email addresses, you need Nepomuk. These are currently considered essential features for a useful PIM stack, so Akonadi depends on Nepomuk being operational. Unfortunately Nepomuk can be turned off, and when it is, the user gets the two unpopular dialogs.

There may be a case for coming up with a unified framework for how services can depend on each other, and for giving the user the opportunity to start essential services they depend on. It might be something to discuss at the Platform 11 sprint.

However, there are things we can change in the short-term that can benefit the user. For one, I’ve turned one of the annoying dialogs into a passive notification using KNotification.

A notification is less annoying than a dialog
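For illustration, the passive version can be as small as this. A minimal sketch, not the actual patch; the “nepomukOffline” event id is hypothetical and would need a matching entry in the application’s .notifyrc file:

  #include <KNotification>
  #include <KLocale>

  void notifyNepomukOffline()
  {
    // Fires a passive notification instead of a modal dialog.
    KNotification::event(QLatin1String("nepomukOffline"),
                         i18n("Desktop search is not available. Email search "
                              "and address autocompletion will not work."));
  }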

Next I’ll have to consider how to show the other annoying dialog only when attempting to search or autocomplete email addresses…

Grantlee::Thermodynamics and Refrigeration

April 1, 2011

With some new classes in Grantlee::Tubes starting to take shape, I started making use of them in the new Grantlee::Thermodynamics library with the Grantlee::Refrigeration system.

Grantlee::Refrigeration makes use of components from Grantlee::Tubes, like Grantlee::Pump, QtIOCompressor and Grantlee::Cat to create an ideal heat transfer system. The intention is to create a stream of “cold bytes” taking heat out of hot paths in your code, and disposing of the heat through your cooling fan.

Coolness. You can't get enough

Thermodynamics teaches us that while the quantity of energy in a closed system is constant, the quality of the energy (its entropy) is not. Entropy in a closed system is always increasing (the amount of useful energy in the universe is always going down), but we can locally decrease the entropy in a body of mass if we transfer it to another body of mass. This is what refrigeration is about. The decrease in entropy is made visible in the cold cavity of a fridge by the state change that water undergoes from fluid (higher entropy) to solid (lower entropy).

We can take heat (enthalpy and entropy) away from somewhere, but we have to dump that heat somewhere else. It takes work and a refrigerant to transfer heat between thermodynamic bodies. Heat won’t move spontaneously by itself (beyond the limits of equilibrium) so typically the work of heat transfer is done by a pump. Heat transfer in a refrigerator works in a cycle of four stages.

Grantlee already provides a Pump which we can use in our thermodynamic system, and we can use any refrigerant which is plentiful and which has a high capacity for entropy and a lot of hot air, such as a twitter feed.

We start by seeding the system with our refrigerant.

  QNetworkAccessManager *manager = new QNetworkAccessManager;
  QNetworkReply *reply = manager->get(QNetworkRequest(QUrl("http://www.twitter.com/feed")));
  Grantlee::Cat *cat = new Grantlee::Cat;
  cat->appendSource(reply);

Cat will read the data from the reply object until it closes, at which point it will start to read from another device to close the cycle (shown later). At this point the data is already saturated; it can’t contain any more entropy at this temperature and pressure.

1 – 2: Reversible Adiabatic (Isentropic) Compression

Typically the first step described in a refrigeration cycle is Isentropic compression – that is, compressing the refrigerant without changing its entropy. The compression causes the data to become super-saturated. We compress the data by tubing the refrigerant through a QtIOCompressor.

  // (condenser shown later)
  Condenser *condenser = new Condenser;
  QtIOCompressor *compressor = new QtIOCompressor(condenser);
  cat->setTarget(compressor);

2 – 3: Constant Pressure Heat Rejection

After compression comes constant pressure heat rejection. As all developers know, constraints of constness can be expressed in C++ with the const keyword. So we require a class for rejecting the heat which will enforce that constraint:

  class Condenser : public QIODevice
  {
    ...
  protected:
    qint64 writeData( const char* data, qint64 len );
  };

Fortunately, the virtual writeData method of QIODevice already takes the data (our refrigerant) as const, so that constraint is already enforced for us. The condenser causes a change of state of the data, thereby decreasing its entropy. The data is once again saturated, but now in its low entropy state and at a lower temperature.

3 – 4: Adiabatic Throttling

We now have to connect up a throttle to perform isentropic expansion, so the entropy and the temperature is maintained, but the refrigerant changes state and becomes unsaturated.

A throttle is trivially implemented by using a QtIOCompressor in reverse, so we omit that for brevity.

At this point, we have our stream of cold bytes at a low temperature and unsaturated, with a capacity to absorb some heat, so let’s do that with an evaporator.

4 – 1: Constant Pressure Heat Addition

We require that heat absorption occurs at constant pressure, and once again the const keyword ensures that.

  class Evaporator : public QIODevice
  {
    ...
  protected:
    qint64 writeData( const char* data, qint64 len );
  };

(The implementation of the Evaporator is left as an exercise for the reader)

We can then use the cold bytes in the method that defines our hot code path where we use an evaporator to facilitate the heat transfer to the refrigerant:

  void MyClass::hotPath(..., QIODevice *source, QIODevice *target)
  {
    // ...

    Evaporator *evaporator = new Evaporator;
    evaporator->setTarget(target);
    evaporator->setSource(source);
  }

The very presence of the evaporator in the hot path of our code is enough to cause heat transfer to the cold bytes, increasing their entropy by causing them to change state.

Of course this means that we need to call our hot path with the refrigerant tubing included:

  Grantlee::Channel *channel1 = new Grantlee::Channel;

  myInstance->hotPath(..., throttle, channel1);

  Grantlee::Pump *pump = new Grantlee::Pump;
  pump->setSource(channel1);

  Grantlee::Channel *channel2 = new Grantlee::Channel;
  pump->setTarget(channel2);

  cat->setSource(channel2);

As a result of the state change in the evaporator the data also becomes saturated at high entropy. Recall that this is the same state the refrigerant we originally introduced from twitter was in.

We route the refrigerant from the hot path into a Grantlee::Pump, which provides the work required to satisfy the Second Law, and then forward the result on to cat, thereby closing the cycle.

Results

I ran some tests using the Grantlee Thermodynamics Toolkit on various algorithms with various parameters of compression and throughput, with results indicating a universal increase in performance when refrigeration was used.

