[FFmpeg-devel] New FATE web interface

Michael Niedermayer michaelni at gmx.at
Tue Nov 20 05:02:51 CET 2012

On Tue, Nov 20, 2012 at 12:32:52AM +0100, Burek Pekaric wrote:
> Recently, we were looking for a way to add new functionality to fflogger (the IRC bot on the devel channel) that would let us know if any FATE machines have been frozen for a while and report it in a periodic message. While working on that, I analyzed the web interface that generates the FATE HTML pages (fateserver), written by Mans Rullgard, and found that it does a lot of file decompression on EACH page request, which is really inefficient/slow. This was also confirmed by michaelni, who had to add a simple hack/patch just to make it a little more responsive than it previously was.
> Now I'm thinking about a new (and hopefully better) approach to this, and that's the main reason I'm writing this message.
> The new approach could look like this: a FATE client submits its data to the FATE server, which digests the submitted data and creates the appropriate digest files plus compressed logs (compile/configure/tests). These digest files would be the starting point for creating HTML, RSS, CSV, or any other kind of pages we might need, so that CPU-intensive operations on the server are done only once rather than on every page request. The tools that generate HTML/RSS/CSV/... pages from the digest files would be available as external (probably command-line) tools, which would allow the output pages to be regenerated if needed.
> This would make the data available as RSS, XML, CSV and other output types, not just HTML, for easy parsing by bots and other client software; it would also make serving pages much quicker, since the HTML/RSS/CSV/... pages would be static.
> If you like the idea, please comment on it. More importantly, please suggest which output formats/layouts we need (HTML, RSS, CSV, XML, TXT, PDF, ...), because based on that we can easily figure out the data common to all of them and design the digest files more efficiently.

> I'm willing to work on this and I have the time and knowledge to do so, so I'm not asking anyone else to do it for me; I'm only looking for your suggestions to make all this as good as it can be.

One suggestion that should be simple: the server should use its own
time when it receives submissions, not the client's time, to
calculate "how long ago was that submission".
The reason is that some of the odder platforms running in virtual
machines have a tendency to lose time, so for example we have some
clients that claim their latest results are from 2 days ago while
they really submitted them less than an hour ago.
Installing ntpd on all the clients would solve this too, but I
suspect that's more work ...

Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

No snowflake in an avalanche ever feels responsible. -- Voltaire
