Praveen sent me a link to qwiki today – it’s another search engine hoping to make it big. As far as we can tell, it searches only Wikipedia, though the examples make it hard to be sure – each query shows only one result.
They create a slideshow of pictures from the Wikipedia entry (perhaps supplemented from elsewhere) while speech synthesis reads parts of the Wikipedia page. I took a look at the Monet and Great White Shark examples and found the presentations generally useful.
The examples I checked seemed like a great alternative for children writing book reports and such – kids who might be intimidated by raw Wikipedia text and structure. I’d even consider using it myself for a cursory glance at something if the voice were better (I’ve heard more natural speech synthesis elsewhere) and if there were a volume control. A few concerns I noted:
- When it talks about a particular time of Monet’s life, are the images from that period? If a human were reading a script about Monet, I’d say it’s reasonable to expect coordination between the images and the narration. But this is a pretend human, so I don’t know what to expect.
- The Great White Shark query highlights a problem with using straight Wikipedia text – the first or second sentence is often a long list of alternate names for the entity. That seems like a great opportunity for some NLP, or perhaps they could use Simple English Wikipedia instead.
- This could be great for music if it did playback – imagine contrasting the sound of the harpsichord with the piano, or comparing and contrasting composers (though the right comparison depends on the intended message – a composer’s contemporaries, or whatever the user is already familiar with).
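The alias-stripping idea from the Great White Shark bullet could be sketched with a couple of regexes – a minimal, hypothetical cleanup of my own, not anything Qwiki actually does, and real NLP would need more than this:

```python
import re

def simplify_lead(sentence):
    """Rough sketch: strip parentheticals and 'also known as' alias
    lists from a Wikipedia-style lead sentence before speech synthesis.
    Hypothetical helper; patterns cover only the common cases."""
    # Drop parenthetical asides, e.g. scientific names.
    s = re.sub(r"\s*\([^)]*\)", "", sentence)
    # Drop ", also/commonly known as ..., ..., " runs ending just
    # before the main verb "is"/"are".
    s = re.sub(
        r",\s*(?:also|commonly) known as[^,]*(?:,[^,]*)*?,\s*(?=is\b|are\b)",
        " ",
        s,
    )
    return s

lead = ("The great white shark (Carcharodon carcharias), also known as "
        "the white shark, white pointer, or white death, is a species "
        "of large mackerel shark.")
print(simplify_lead(lead))
# → The great white shark is a species of large mackerel shark.
```

Even this crude version would make the opening sentence far more listenable than having a synthesized voice recite a taxonomy of nicknames.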