KDE PIM 4.6 RC1

May 15, 2011

The Kontact2 release train rolls on. Today we fixed the last of the release blocking bugs and tagged the release candidate.

It seems that many people around the Internet are as excited about the new release as we are. The message that this is not a bug-free release seems to be getting through. There will be bugs to annoy users, but the applications are usable and useful, and they work without crashing and without losing data. The reports on my last post (which will of course be forgotten unless filed as non-duplicate bug reports :)) indicate normal-to-low severity bugs of the kind expected in any Free Software, not to mention a release involving a platform re-architecture and a multi-application port and rewrite.

This version will have regressions compared to KDE PIM 4.4. There was never a goal to create a Kontact2 with zero regressions. The only goal was to create a working release. After that the work on making it perfect can begin. Division of resources between maintaining the 4.4 series and attempts to perfect the Kontact2 release was causing demotivation in the community. Making the release is the act that allows us to cross the starting line towards fixing the smaller issues.

Prepare to land!

So, onwards and onwards. The tagging of the release is due to happen on June 2nd, before the release alongside KDE SC 4.6.4, so all fixes and translations which are in the source tree by then will get in :).

The next step: Coisceim

May 2, 2011

Using cloud services for your network and data gives away your control over your network and data. I read with interest about how Ars Technica lost both temporarily, and about the stories of other people who don’t have the weight of a well-known news website behind them to help get it back. If you depend on cloud services for your network and data, they are not your network and data. You just have the privilege of being able to access them, which can be taken away for any or no reason. Usually the TOS of such services say as much. That doesn’t matter though, as you didn’t read the TOS anyway.

Usually the main features of such services are access anywhere, aggregation, customized visualization and social features like uploading your picture and allowing everyone to consume your data. These are things which can be provided to some extent by locally installed applications too.

The new KDE PIM platform allows local aggregation of many different types of data. The aggregation is a feature in itself. As PIM data is part of the KDE PIM Platform, and not individual applications, this allows a departure from applications which focus on a single type of data or protocol (such as KMail -> email, KAddressBook -> contacts, KOrganizer -> events), to a focus on what the user is doing or trying to achieve.

In the old KDE PIM Platform applications owned the data and provided a scriptable access interface to it over D-Bus. In the new platform however, applications only provide a user interface to the data, and the data interface is provided by Akonadi. That makes the applications themselves far smaller, making it easy to split them up and create more purpose-built applications to fit with what the user wants. Newspeak centerward make easy newapplications indeed.
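As a small illustration of what that data interface looks like from application code, here is a sketch of fetching items directly from Akonadi (the collection id and the result slot are hypothetical):

  #include <Akonadi/Collection>
  #include <Akonadi/ItemFetchJob>
  #include <Akonadi/ItemFetchScope>

  // Fetch the full payload of every item in a collection. No particular
  // application needs to be running for this to work; Akonadi serves the data.
  Akonadi::ItemFetchJob *job = new Akonadi::ItemFetchJob(Akonadi::Collection(42), this);
  job->fetchScope().fetchFullPayload();
  connect(job, SIGNAL(result(KJob*)), this, SLOT(itemsFetched(KJob*)));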

I’ve just pushed a new application as an example use of the KDE PIM Platform called coisceim, which is used for managing trips.

Using coisceim I can now associate a mail folder, a calendar of todo items, and a KJots book with an event, and track it as one semantic object, a trip. The video below shows the current UI (which obviously needs work :) ) being used to create a trip, mark todo items as done, and edit notes.


(MP4 version)

Coiscéim, pronounced ‘Kush-came’, is an Irish word meaning footstep. I dropped the fada from the application name out of sympathy for those non-Gaeilgeoir gearheads.

Coisceim provides a single user interface for accessing mails, todo items and notes relevant to an event. In my day job I do some travel for consultancy and to deliver Qt trainings. The planning of such events usually includes emails back and forth about the content and time of the training and flight, car and hotel bookings. Alongside that, the training manager creates todo items which must be completed prior to the training such as printing the material, getting in touch with the customer etc. We handle those in KOrganizer (until now). Additionally, I need to make notes about who to contact when I arrive, the address of the hostel etc, and during the training I use notes to keep track of student queries so I can answer them the following day. A single focussed, coherent application for all of those aspects is intended to make this all easier to manage.

Interesting new user stories emerge with purpose-built applications on the new platform, such as being able to use a dedicated Plasma activity for planning a trip. In the video below I use a Plasma activity which is configured with a KJots plasmoid to edit a note which is already part of a trip planning. The plasmoid is used as a form of well organized clipboard in this case.


(MP4 version)

Because no application owns the data and there can be many applications capable of accessing the same data, editing the notes in one application updates them in all applications instantly. So when I edit the note in the KJots plasmoid, the note in coisceim is also updated.
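Under the hood this is the platform’s change notification at work. A sketch of how an application subscribes to it (the note mime type string and the slot are assumptions here):

  #include <Akonadi/Monitor>
  #include <Akonadi/ItemFetchScope>

  // Be told whenever any application changes a note item.
  Akonadi::Monitor *monitor = new Akonadi::Monitor(this);
  monitor->setMimeTypeMonitored(QLatin1String("text/x-vnd.akonadi.note")); // assumed mime type
  monitor->itemFetchScope().fetchFullPayload();

  connect(monitor, SIGNAL(itemChanged(Akonadi::Item,QSet<QByteArray>)),
          this, SLOT(noteChanged(Akonadi::Item,QSet<QByteArray>)));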

Coisceim can also of course be embedded into Kontact, and there is a mobile variant using some components from the Kontact Touch suite.


(MP4 version)

In the KDE PIM community we have many ideas about other task-orientated applications we’d like to build, but we’d like to hear other ideas too. The ideas demonstrated at the last KDE PIM meeting, like the notetaker, are also good examples of the kinds of directions we can go in with PIM data visualization.

Writing coisceim, including the Kontact plugin and the mobile variant, has taken a total of approximately 5 man-days. It is clearly not a complete application yet, but most people would appreciate that being able to turn an idea into a screencastable prototype in one week is certainly impressive and a credit to the platform. Further work on the application might involve making contacts part of the UI, using Nepomuk to automatically create associations of emails, todos and notes to trips, making it prettier, etc., but I don’t know if I’ll ever have time to do that. As a platform dude I’m more of an enabler really. I just made the app to show what the KDE PIM Platform is capable of, so if anyone else wants to take the reins I’ll happily mentor.

What tasks do you do which cause you to switch between many PIM applications at the same time? What keeps you tied to the cloud (if you don’t like that :)) ?

It is coming…

May 1, 2011

I joined KDE development in 2006, and joined the KDE PIM team soon after that, where my claim to fame has been EntityTreeModel and other related Qt models and proxies.

My development efforts have always been on the next generation, Akonadi based, Nepomuk using, full blown database depending, modular, portable beauty that will soon be known as KDE PIM 4.6.

After such a long time spent developing the new application suite, it will be great to finally pass the starting line of “Kontact 2”. This is of course just a new beginning for KDE PIM, a milestone in the application development lifecycle and a renewed focus on software that users actually have their hands on. For a long time it has made sense to fix bugs in KDE PIM 4.4 because that is what users had installed. With this release, the development effort turns fully to the future platform.

KDE PIM 4.6 is due to be released on June 7th alongside the 4.6.4 versions of the rest of the KDE application suite.

In terms of features, we think it is user-ready, and there have been many positive reports already from users of development versions. It is definitely not bug-free however. There are still some problematic communications with Exchange servers, and some resource usage spikes, but we are confident that this is overall a step forward. Users will (mostly) be able to downgrade to KDE PIM 4.4 if the 4.6 version does not meet expectations, but that is not a long-term solution, so good bug reports will be required for a smoother experience going forward.

Making software releases is an interesting process requiring coordination: translation teams to ensure that the correct branch is used to generate translation files and produce translated software, promo teams to turn the story of the release into an understandable message with the right expectations, packaging teams to get the software into repositories for end users while ensuring the on-going quality of existing installations, and even developers to get the remaining bugs resolved.

The packaging world is something I’ve been getting into lately so I can begin to understand what is involved with packages and distros more broadly. Technically KDE PIM is not going to be part of any 4.6 release of the KDE Application Suite, but will be released alongside it on the same day. That way we avoid disrupting the ecosystem and momentum of minor KDE releases with a major application release, and ensure the ongoing quality of stable updates. Being able to rely on the stability of point releases makes it easier to justify making such releases available in -updates repos. KDE PIM should once again be part of the regular 4.7 release cycle though.

An elaborate joke?

April 6, 2011

I started writing Grantlee::Tubes some time in December 2010. In the course of writing it I’ve mostly been researching what the dependable API of QIODevice is. I don’t know when Tubes will get into a release of the Grantlee libraries, but it probably won’t be the next release. The classes don’t yet do all of the error handling they could, and the QIODevice research is still ongoing.

Nevertheless I decided to start publishing blogs about Tubes in mid-March to give context to the April fools post about it.

Talk about building up a joke.

The library and concepts are real and useful though, so I’ll push on with publishing these experimental devices to the repo.

Consider a use case where you want to read from a QIODevice that is being written to by another class. For example, QTextDocumentWriter writes a stream of valid ODF XML to a QIODevice, and QXmlStreamReader reads a stream of XML data from a QIODevice.

How can we connect them together?

One way might be to use a QBuffer.

  QBuffer buffer;
  buffer.open(QIODevice::ReadWrite);

  QTextDocumentWriter writer(&buffer);
  writer.write(document);

  buffer.seek(0);

  QXmlStreamReader reader(&buffer);

  // ... Do something with the XML.

This works, but it’s not a generic solution. If we wanted to do asynchronous writing of data to the device and asynchronous line based reading from it, we would have to make the buffer a member of a class, and when reading from it we would have to do something like this:

  void MyClass::onReadyRead()
  {
    if (!m_buffer->canReadLine())
      return;
    m_buffer->seek(0);
    const QByteArray line = m_buffer->readLine();
    // Discard the part we have already read from the front of the buffer.
    m_buffer->buffer().remove(0, line.length());
    m_buffer->seek(m_buffer->size());
    useLine(line);
  }

Reading from a buffer does not invalidate the data it holds. We have to use a method returning the QByteArray to chop off the part we read ourselves. We also have to remember to seek() a few times on the buffer. I didn’t even try the code out for off-by-ones.

Enter Grantlee::Channel.

Grantlee::Channel already made an appearance in my last post. The idea is to solve a connection problem with QIODevices. While Pump can transfer data from a device that should be read to a device that should be written to, Grantlee::Channel is an intermediary providing both a device that consumes data and one that produces data.

The difference between Grantlee::Pump and Grantlee::Channel

There are several significant differences between it and QBuffer. QBuffer is not a sequential device, but Channel is. That means that the pos() and seek() methods are relevant API when working with a QBuffer, but irrelevant and meaningless when working with a Channel. As an implementor of the QIODevice API that means I don’t have to implement any meaning into those virtual methods and can ignore them. Instead I can implement the readData and writeData methods to provide FIFO semantics. The Channel can be written to at any time, and read from whenever it has data. There is no need for seek()ing, and it automatically discards data that has been read, meaning no manual memory conservation responsibility for the caller.

    QTextDocument *document = getDocument();

    Grantlee::Channel *channel = new Grantlee::Channel(this);
    channel->open(QIODevice::ReadWrite);

    // Write to the channel
    QTextDocumentWriter writer(channel);
    writer.write(document);

    // Read from the channel
    QXmlStreamReader reader(channel);

    // Use the reader.
Easy.
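For illustration, here is a rough sketch of how a sequential FIFO device along these lines might be implemented. This is an assumption about one possible approach, not the actual Grantlee::Channel code:

  #include <QIODevice>
  #include <QByteArray>
  #include <cstring>

  class FifoDevice : public QIODevice
  {
    Q_OBJECT
  public:
    explicit FifoDevice(QObject *parent = 0) : QIODevice(parent) {}

    bool isSequential() const { return true; }

    qint64 bytesAvailable() const
    { return m_data.size() + QIODevice::bytesAvailable(); }

  protected:
    // Reads consume from the front of the FIFO; consumed data is discarded.
    qint64 readData(char *data, qint64 maxSize)
    {
      const qint64 size = qMin(maxSize, qint64(m_data.size()));
      std::memcpy(data, m_data.constData(), size);
      m_data.remove(0, size);
      return size;
    }

    // Writes append to the back of the FIFO and notify readers.
    qint64 writeData(const char *data, qint64 len)
    {
      m_data.append(data, int(len));
      emit readyRead();
      return len;
    }

  private:
    QByteArray m_data;
  };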

Another one bytes the dust

April 5, 2011

I’ve just pushed a change to kdepim 4.4 which removes this annoying dialog in a few annoying cases.

For users, this dialog was coming up without seeming to give any useful information, and when dismissed, the application was usable anyway.

Showing the dialog was actually a bug that was fixed some time in February 2010 with improvements to the kdepim libraries, but because there was no KDEPIM applications 4.5 release, the fix didn’t make it to users.

The fix was to make the applications not call dangerous API with sub-eventloops.

Making KDEPIM less annoying

April 4, 2011

I’ve started looking into KDEPIM 4.6 on Kubuntu Natty to see if it can be made less annoying to use. There are two unpopular dialogs which appear when using KDEPIM. Both are telling the user that essential PIM services are not fully operational.

The essential PIM services are Akonadi and Nepomuk. Akonadi provides access to all PIM data (emails, contacts, events etc) the user has. It is started automatically if using a PIM application like KMail2, KAddressBook, KOrganizer, KJots and more. There is no configuration option to turn Akonadi off. Akonadi is a cache server which uses a database like MySQL or SQLite to cache data.

Nepomuk provides indexing and searching capabilities to the PIM infrastructure. If you want to search your email, or use autocompletion when typing in email addresses, you need Nepomuk. These are currently considered essential features for a useful PIM stack, so Akonadi depends on Nepomuk being operational. Unfortunately it can be turned off, and when it is off, that’s when the user gets the two unpopular dialogs.

There may be a case for coming up with a unified framework for how services can depend on each other and give the user the opportunity to start essential dependent services. It might be something to discuss at the Platform 11 sprint.

However, there are things we can change in the short-term that can benefit the user. For one, I’ve turned one of the annoying dialogs into a passive notification using KNotification.

A notification is less annoying than a dialog
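The change is roughly along these lines (a sketch rather than the exact patch, and the notification text is only illustrative), using the standard KNotification API:

  #include <KNotification>
  #include <KLocalizedString>

  // Replace the modal dialog with a passive notification.
  KNotification::event(KNotification::Warning,
                       i18n("The Nepomuk search service is not running. "
                            "Search and address autocompletion will not work."));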

Next I’ll have to consider how to show the other annoying dialog only when attempting to search or autocomplete email addresses…

Grantlee::Thermodynamics and Refrigeration

April 1, 2011

With some new classes in Grantlee::Tubes starting to take shape, I started making use of them in the new Grantlee::Thermodynamics library with the Grantlee::Refrigeration system.

Grantlee::Refrigeration makes use of components from Grantlee::Tubes, like Grantlee::Pump, QtIOCompressor and Grantlee::Cat to create an ideal heat transfer system. The intention is to create a stream of “cold bytes” taking heat out of hot paths in your code, and disposing of the heat through your cooling fan.

Coolness. You can't get enough

Thermodynamics teaches us that while the quantity of energy in a closed system is constant, the quality of the energy (its entropy) is not. Entropy in a closed system is always increasing (the amount of useful energy in the universe is always going down), but we can locally decrease the entropy in a body of mass if we transfer it to another body of mass. This is what refrigeration is about. The decrease in entropy is made visible in the cold cavity of a fridge by the state change that water undergoes from fluid (higher entropy) to solid (lower entropy).

We can take heat (enthalpy and entropy) away from somewhere, but we have to dump that heat somewhere else. It takes work and a refrigerant to transfer heat between thermodynamic bodies. Heat won’t move spontaneously by itself (beyond the limits of equilibrium) so typically the work of heat transfer is done by a pump. Heat transfer in a refrigerator works in a cycle of four stages.

Grantlee already provides a Pump which we can use in our thermodynamic system, and we can use any refrigerant which is plentiful and which has a high capacity for entropy and a lot of hot air, such as a twitter feed.

We start by seeding the system with our refrigerant.

  QNetworkAccessManager *manager = new QNetworkAccessManager;
  QNetworkReply *reply = manager->get(QNetworkRequest(QUrl("http://www.twitter.com/feed")));
  Grantlee::Cat *cat = new Grantlee::Cat;
  cat->appendSource(reply);

Cat will read the data from the reply object until it closes, at which point it will start to read from another device to close the cycle (shown later). At this point the data is already saturated; it can’t contain any more entropy at this temperature and pressure.

1 – 2: Reversible Adiabatic (Isentropic) Compression

Typically the first step described in a refrigeration cycle is Isentropic compression – that is, compressing the refrigerant without changing its entropy. The compression causes the data to become super-saturated. We compress the data by tubing the refrigerant through a QtIOCompressor.

  // (condenser shown later)
  Condenser *condenser = new Condenser;
  QtIOCompressor *compressor = new QtIOCompressor(condenser);
  cat->setTarget(compressor);

2 – 3: Constant Pressure Heat Rejection

After compression comes constant pressure heat rejection. As all developers know, constraints of constness can be expressed in C++ with the const keyword. So we require a class for rejecting the heat which will enforce that constraint:

  class Condenser : public QIODevice
  {
    // ...
  protected:
    qint64 writeData( const char* data, qint64 len );
  };

Fortunately, the virtual writeData method of QIODevice already takes the data (our refrigerant) as const, so that constraint is already enforced for us. The condenser causes a change of state of the data, thereby decreasing its entropy. The data is once again saturated, but now in its low entropy state and at a lower temperature.

3 – 4: Adiabatic Throttling

We now have to connect up a throttle to perform isentropic expansion, so the entropy and the temperature are maintained, but the refrigerant changes state and becomes unsaturated.

A throttle is trivially implemented by using a QtIOCompressor in reverse, so we omit that for brevity.

At this point, we have our stream of cold bytes at a low temperature and unsaturated, with a capacity to absorb some heat, so let’s do that with an evaporator.

4 – 1: Constant Pressure Heat Addition

We require that heat absorption occurs at constant pressure, and once again the const keyword ensures that.

  class Evaporator : public QIODevice
  {
    // ...
  protected:
    qint64 writeData( const char* data, qint64 len );
  };

(The implementation of the Evaporator is left as an exercise for the reader)

We can then use the cold bytes in the method that defines our hot code path where we use an evaporator to facilitate the heat transfer to the refrigerant:

  void MyClass::hotPath(..., QIODevice *source, QIODevice *target)
  {

   // ...

    Evaporator *evaporator = new Evaporator;
    evaporator->setTarget(target);
    evaporator->setSource(source);
  }

The very presence of the evaporator in the hot path of our code is enough to cause heat transfer to the cold bytes, increasing their entropy by causing them to change state.

Of course this means that we need to call our hot path with the refrigerant tubing included:

  Grantlee::Channel *channel1 = new Grantlee::Channel;

  myInstance->hotPath(..., throttle, channel1);

  Grantlee::Pump *pump = new Grantlee::Pump;
  pump->setSource(channel1);

  Grantlee::Channel *channel2 = new Grantlee::Channel;
  pump->setTarget(channel2);

  cat->setSource(channel2);

As a result of the state change in the evaporator the data also becomes saturated at high entropy. Recall that this is the same state the refrigerant we originally introduced from twitter was in.

We route the refrigerant from the hot path and into a Grantlee::Pump, which provides the work required to satisfy the Second Law, and then forwards the result on to cat, thereby closing the cycle.

Results

I ran some tests using the Grantlee Thermodynamics Toolkit on various algorithms with various parameters of compression and throughput, with results indicating a universal increase in performance when refrigeration was used.

Grantlee::Cat is(-not)-a QIODevice

March 30, 2011

In some ways, cat is an opposite of tee. Whereas tee reads from a single source and writes to multiple, cat reads from multiple sources including optionally stdin, and writes to a single target, stdout.

Reading from stdin is actually a key difference between cat and echo, in that echo ignores stdin.

# Doesn't work (where's the echo?):
cat somefile | echo

In the next example, however, the second cat will uselessly write the input back to the output.

# Does work:
cat somefile | cat

Initially I considered making Grantlee::Cat implement QIODevice so that it too could be written to. That would make it analogous to the unix command, but would make the implementation of Cat more complex. It would need to use a Grantlee::Channel and maybe a Grantlee::Reservoir internally (which I haven’t written yet) and wouldn’t really have any advantages that can’t be achieved some other way.

Then I decided that I should make it implement QIODevice anyway, because it would make the title of this blog post funnier.

In the end though I decided that blog titles are not a good yardstick to measure the aptness of a design, so Grantlee::Cat is-a QObject instead.

Once I have Grantlee::Channel written it might actually make sense to be able to write to Cat, and it would be easy to implement, so I might reconsider it then anyway.

Cat reads sequentially from a list of QIODevices and writes to a QIODevice target. The internal implementation is very simple. It uses a Grantlee::Pump pointing at the target, and sets the source of the pump to each source device in the list until the source device is closed.
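A usage sketch, using the appendSource/setTarget calls shown elsewhere in these posts (the getter functions are assumed, and the devices are assumed to be open already):

  QFile *part1 = getFile("part1");
  QFile *part2 = getFile("part2");
  QTcpSocket *socket = getSocket();

  // Stream the two files onto the socket, one after the other.
  Grantlee::Cat *cat = new Grantlee::Cat;
  cat->appendSource(part1);
  cat->appendSource(part2);
  cat->setTarget(socket);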

So Cat is easy to use with readable QIODevices, but if you want to write to Cat (from multiple devices), that’s where Grantlee::Channel will come in.

Pumpin’ ain’t easy

March 23, 2011

An example in my last post used a Grantlee::Pump to transfer data from one QIODevice to another. I’ve just added the Pump class to the Grantlee::Tubes library.

Pumping tends to limits of capacity and drainage

QIODevice provides an asynchronous API for clients to use. A call to readAll() will return the data currently available from the device, but as time passes and the event loop turns, more data may become available.

  QTcpSocket *socket = getSocket();
  QFile *logFile = getLogFile();

  // Read all data from the socket and write it to the log file as it becomes available.
  Grantlee::Pump *pump = new Grantlee::Pump(this);
  pump->setSource(socket);
  pump->setTarget(logFile);

The Pump encapsulates the handling of the readyRead() signal so that clients don’t need to do that themselves.
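By way of illustration, this is roughly the boilerplate each client would otherwise repeat (the member names here are hypothetical):

  // In the client class constructor:
  connect(m_socket, SIGNAL(readyRead()), this, SLOT(transferData()));

  // The slot that shovels data across each time some arrives:
  void MyClass::transferData()
  {
    m_logFile->write(m_socket->readAll());
  }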

Actually, pumpin’ IS easy

I used this class just a short time ago as a debugging tool. Working on an embedded platform with only a single serial cable can be a significant constraint. While evaluating performance on the system, I was attempting to run top -d 1 -b & and then start the target application along with a command line interface to it. The problem was that any attempt to make top run and record in the background failed. The command line interaction system seemed to conflict with proper execution of top, which simply terminated.

Enter QProcess with Grantlee::Pump.

The trick was to make QProcess run top instead of starting it over the serial connection. The output of top (batched) would be written out by QProcess. Of course, QProcess implements QIODevice, so all I needed to do was pump from the QProcess into a QFile:

int main(int argc, char **argv)
{
  QApplication app(argc, argv);

  QProcess topProc;
  topProc.start(QLatin1String("top"), QStringList() << QString::fromLatin1("-d") << QString::fromLatin1("1") << QString::fromLatin1("-b"));

  QFile logFile(app.applicationDirPath() + QLatin1String("/topoutput"));
  logFile.open(QFile::WriteOnly);

  Grantlee::Pump pump;
  pump.setSource(&topProc);
  pump.setTarget(&logFile);

  int exitCode = app.exec();

  logFile.close();
  return exitCode;
}

Of course, this is equally possible without Grantlee::Pump. The class does not solve a hard problem, but it solves it in an object orientated way, making it easy to use and reuse as part of larger systems.

Pump takes care of the limiting rate of drainage from the source device. To handle the limiting capacity of the target will require a different Tube.

Tee is for Tubes

March 22, 2011

It is a curiosity that both existing Grantlee libraries begin with the letter ‘T’. This cannot have been a coincidence. I thought it best that I continue the trend and keep adding libraries whose names begin with ‘T’.

So the Grantlee Pipes library was renamed to Grantlee Tubes. Grantlee Tubes is a library of classes supporting the QIODevice API in an object-orientated way. The developer can connect a series of Tubes to achieve exacting goals in much the same way that the Unix programmer connects commands with pipes (‘|’). The first class to hit the public repo is, appropriately, Grantlee::Tee.

Trial and error

I encountered the tee command when I first started using Ubuntu and came across instructions like this to add repositories to the system:

echo "deb http://security.ubuntu.com/ubuntu jaunty-security main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list

I already knew about shell redirection by then so I had wondered why I couldn’t simply do this:

echo "deb http://security.ubuntu.com/ubuntu jaunty-security main restricted universe multiverse" >> /etc/apt/sources.list

Firstly, permission denied, so try Plan B

sudo echo "deb http://security.ubuntu.com/ubuntu jaunty-security main restricted universe multiverse" >> /etc/apt/sources.list

This doesn’t work because echo is run as superuser, but that permission does not cross the ‘>>’ boundary. It still attempts to write to the root-owned file as a normal user.

tee resolves that issue by reading from stdin and writing to standard out (like cat) AND to any specified targets.

Tee for Two

The name Tee comes from a plumbing term for a device which splits a stream of fluid to two targets. The tee command allows duplicating a stream to N targets, which allows quite a bit of versatility.

Tee: Duplicating targets

The GNU Coreutils manual is informative and illustrative on the use and use cases of tee. The primary advantages of its use are that it allows processing of streams efficiently, both in terms of memory use (the entire input does not need to be held in memory) and in terms of speed, by enabling parallel processing.

 wget -O - http://example.com/dvd.iso \
       | tee dvd.iso | sha1sum > dvd.sha1 

Piping and Tubing

QIODevice is one of the most dominant interfaces in Qt. It is the base for most data reading and writing APIs such as QFile, QTcpSocket, QNetworkReply etc., and it is the interface used by most collaborators in data reading and writing, such as QDataStream, QTextStream, QTextDocumentWriter and more. This means, for example, that a QTextDocument may be written to a TCP socket or a QProcess just as easily as it can be written to a file, from a Qt API point of view. The asynchronous nature of QIODevice adds to its versatility and suitability for many tasks around streaming data in the real world.
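For example (a quick sketch; the getter functions are assumed), exactly the same writer code works regardless of which device it is given:

  QTextDocument *document = getDocument();
  QFile *file = getFile("out.odf");
  QTcpSocket *socket = getSocket();

  // Write the document to a file as ODF...
  QTextDocumentWriter fileWriter(file, "ODF");
  fileWriter.write(document);

  // ...and stream exactly the same document to a socket.
  QTextDocumentWriter socketWriter(socket, "ODF");
  socketWriter.write(document);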

Just as we combine commands in a Unix shell with pipes, so too we want to be able to combine source and target QIODevices in efficient and familiar ways. It should be possible to implement something like this in Qt:

tardir=grantlee_src
     tar chof - "$tardir" \
       | tee >(gzip -9 -c > grantlee.tar.gz) \
       | bzip2 -9 -c > grantlee.tar.bz2

This is where Grantlee::Tubes comes in.

Tubular

Want to write large data to multiple files without multiple read/write cycles, without holding all the data in memory, and without file copying? Easy:

QIODevice *sourceDevice = getSourceData();
QFile *file1 = getFile("file1");
QFile *file2 = getFile("file2");
QFile *file3 = getFile("file3");

Grantlee::Tee *tee = new Grantlee::Tee(this);

tee->appendTarget(file1);
tee->appendTarget(file2);
tee->appendTarget(file3);

// Now write to the Tee. readAll() is used for demonstration, but it is
// not memory efficient.
tee->write(sourceDevice->readAll());

Want to write to a log file while also writing to a QTcpSocket? Easy:

QIODevice *sourceDevice = getSourceData();
QTcpSocket *output = getOutput();
QFile *logFile = getLogfile();

Grantlee::Tee *tee = new Grantlee::Tee(this);

tee->appendTarget(output);
tee->appendTarget(logFile);

tee->write(sourceDevice->readAll());

An important point to grasp is that Grantlee::Tee implements QIODevice. That means that you can write to a Tee (and therefore multiple targets) just as easily as you can write to a single target.
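Internally, the idea is simply that writeData forwards whatever it is given to each appended target. A rough sketch of that idea (not the actual implementation; m_targets is an assumed member) might look like:

  qint64 Tee::writeData(const char *data, qint64 len)
  {
    // Duplicate the incoming bytes to every appended target device.
    foreach (QIODevice *target, m_targets)
      target->write(data, len);
    return len;
  }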

This can be made easier (and asynchronous) using Grantlee::Pump:

tee->appendTarget(output);
tee->appendTarget(logFile);

// Replaced with Pump.
// tee->write(sourceDevice->readAll());

Grantlee::Pump *pump = new Grantlee::Pump(this);
pump->setSource(sourceDevice);
pump->setTarget(tee);

Grantlee::Pump listens to the readyRead signal and writes all available data in the source to the target until there’s no more left. That means that data can be fed into Tee in smaller chunks, and it is not necessary to read all the data into memory up front.

Compression is equally possible. Here’s an example where a Grantlee::Template is rendered to a QIODevice which is piped through a compressor, and then to a TCP socket:

QTcpSocket *socket = getSocket();
Grantlee::Compressor *compressor = new Grantlee::Compressor(this);
compressor->setTarget(socket);

Template t = engine->loadByName("main.html");
Context c = getContext();
// The string template is rendered to the compressor, which writes to the socket. 
t->render(compressor, c);

Down to a Tee

Several of these classes are already in various stages of being written, getting tests and documentation. For now I’ve added Tee to a volatile branch of Grantlee destined for some future release. Along with that all of the infrastructure for a third library in Grantlee has been added: documentation, CMake infrastructure, test framework etc. Grantlee::Tubes will continue growing in the coming days.

